00:00:00.001 Started by upstream project "autotest-per-patch" build number 120987 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.077 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.078 The recommended git tool is: git 00:00:00.078 using credential 00000000-0000-0000-0000-000000000002 00:00:00.080 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.098 Fetching changes from the remote Git repository 00:00:00.101 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.128 Using shallow fetch with depth 1 00:00:00.128 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.128 > git --version # timeout=10 00:00:00.152 > git --version # 'git version 2.39.2' 00:00:00.152 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.153 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.153 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.929 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.940 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.950 Checking out Revision 6e1fadd1eee50389429f9abb33dde5face8ca717 (FETCH_HEAD) 00:00:04.950 > git config core.sparsecheckout # timeout=10 00:00:04.960 > git read-tree -mu HEAD # timeout=10 00:00:04.974 > git checkout -f 6e1fadd1eee50389429f9abb33dde5face8ca717 # timeout=5 00:00:04.991 Commit message: "pool: attach build logs for failed merge builds" 00:00:04.991 > git rev-list --no-walk 6e1fadd1eee50389429f9abb33dde5face8ca717 # timeout=10 00:00:05.093 [Pipeline] Start of Pipeline 00:00:05.110 [Pipeline] library 00:00:05.112 Loading library shm_lib@master 00:00:05.112 Library shm_lib@master is cached. Copying from home. 00:00:05.133 [Pipeline] node 00:00:05.145 Running on WFP5 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:05.147 [Pipeline] { 00:00:05.159 [Pipeline] catchError 00:00:05.161 [Pipeline] { 00:00:05.178 [Pipeline] wrap 00:00:05.190 [Pipeline] { 00:00:05.199 [Pipeline] stage 00:00:05.201 [Pipeline] { (Prologue) 00:00:05.384 [Pipeline] sh 00:00:05.663 + logger -p user.info -t JENKINS-CI 00:00:05.686 [Pipeline] echo 00:00:05.687 Node: WFP5 00:00:05.695 [Pipeline] sh 00:00:05.987 [Pipeline] setCustomBuildProperty 00:00:05.996 [Pipeline] echo 00:00:05.997 Cleanup processes 00:00:06.000 [Pipeline] sh 00:00:06.277 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.277 2814658 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.290 [Pipeline] sh 00:00:06.572 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.572 ++ grep -v 'sudo pgrep' 00:00:06.572 ++ awk '{print $1}' 00:00:06.572 + sudo kill -9 00:00:06.572 + true 00:00:06.593 [Pipeline] cleanWs 00:00:06.603 [WS-CLEANUP] Deleting project workspace... 00:00:06.603 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.609 [WS-CLEANUP] done 00:00:06.614 [Pipeline] setCustomBuildProperty 00:00:06.633 [Pipeline] sh 00:00:06.915 + sudo git config --global --replace-all safe.directory '*' 00:00:06.985 [Pipeline] nodesByLabel 00:00:06.986 Found a total of 1 nodes with the 'sorcerer' label 00:00:06.995 [Pipeline] httpRequest 00:00:07.000 HttpMethod: GET 00:00:07.000 URL: http://10.211.164.96/packages/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz 00:00:07.001 Sending request to url: http://10.211.164.96/packages/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz 00:00:07.012 Response Code: HTTP/1.1 200 OK 00:00:07.012 Success: Status code 200 is in the accepted range: 200,404 00:00:07.013 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz 00:00:08.925 [Pipeline] sh 00:00:09.203 + tar --no-same-owner -xf jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz 00:00:09.223 [Pipeline] httpRequest 00:00:09.227 HttpMethod: GET 00:00:09.227 URL: http://10.211.164.96/packages/spdk_0d1f30fbf8d2a002d60a3252a65d4ffbff392cdb.tar.gz 00:00:09.228 Sending request to url: http://10.211.164.96/packages/spdk_0d1f30fbf8d2a002d60a3252a65d4ffbff392cdb.tar.gz 00:00:09.243 Response Code: HTTP/1.1 200 OK 00:00:09.244 Success: Status code 200 is in the accepted range: 200,404 00:00:09.244 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_0d1f30fbf8d2a002d60a3252a65d4ffbff392cdb.tar.gz 00:00:35.712 [Pipeline] sh 00:00:35.998 + tar --no-same-owner -xf spdk_0d1f30fbf8d2a002d60a3252a65d4ffbff392cdb.tar.gz 00:00:38.547 [Pipeline] sh 00:00:38.826 + git -C spdk log --oneline -n5 00:00:38.826 0d1f30fbf sma: add listener check on vfio device creation 00:00:38.826 129e6ba3b test/nvmf: add missing remove listener discovery 00:00:38.827 38dca48f0 libvfio-user: update submodule to point to `spdk` branch 00:00:38.827 7a71abf69 fuzz/llvm_vfio_fuzz: limit length of generated data to `bytes_per_cmd` 00:00:38.827 fe11fef3a fuzz/llvm_vfio_fuzz: fix `fuzz_vfio_user_irq_set` incorrect data length 00:00:38.839 [Pipeline] } 00:00:38.856 [Pipeline] // stage 00:00:38.864 [Pipeline] stage 00:00:38.866 [Pipeline] { (Prepare) 00:00:38.886 [Pipeline] writeFile 00:00:38.905 [Pipeline] sh 00:00:39.185 + logger -p user.info -t JENKINS-CI 00:00:39.198 [Pipeline] sh 00:00:39.477 + logger -p user.info -t JENKINS-CI 00:00:39.488 [Pipeline] sh 00:00:39.970 + cat autorun-spdk.conf 00:00:39.970 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:39.970 SPDK_TEST_NVMF=1 00:00:39.970 SPDK_TEST_NVME_CLI=1 00:00:39.970 SPDK_TEST_NVMF_NICS=mlx5 00:00:39.970 SPDK_RUN_UBSAN=1 00:00:39.970 NET_TYPE=phy 00:00:39.977 RUN_NIGHTLY=0 00:00:39.983 [Pipeline] readFile 00:00:40.002 [Pipeline] withEnv 00:00:40.004 [Pipeline] { 00:00:40.015 [Pipeline] sh 00:00:40.293 + set -ex 00:00:40.293 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:00:40.293 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:00:40.293 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:40.293 ++ SPDK_TEST_NVMF=1 00:00:40.293 ++ SPDK_TEST_NVME_CLI=1 00:00:40.293 ++ SPDK_TEST_NVMF_NICS=mlx5 00:00:40.293 ++ SPDK_RUN_UBSAN=1 00:00:40.293 ++ NET_TYPE=phy 00:00:40.293 ++ RUN_NIGHTLY=0 00:00:40.293 + case $SPDK_TEST_NVMF_NICS in 00:00:40.293 + DRIVERS=mlx5_ib 00:00:40.293 + [[ -n mlx5_ib ]] 00:00:40.293 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:40.293 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:46.861 rmmod: ERROR: Module irdma is not currently loaded 00:00:46.861 rmmod: ERROR: Module i40iw 
is not currently loaded 00:00:46.861 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:46.861 + true 00:00:46.861 + for D in $DRIVERS 00:00:46.861 + sudo modprobe mlx5_ib 00:00:46.861 + exit 0 00:00:46.871 [Pipeline] } 00:00:46.889 [Pipeline] // withEnv 00:00:46.894 [Pipeline] } 00:00:46.911 [Pipeline] // stage 00:00:46.921 [Pipeline] catchError 00:00:46.923 [Pipeline] { 00:00:46.937 [Pipeline] timeout 00:00:46.937 Timeout set to expire in 40 min 00:00:46.939 [Pipeline] { 00:00:46.958 [Pipeline] stage 00:00:46.960 [Pipeline] { (Tests) 00:00:46.974 [Pipeline] sh 00:00:47.254 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:00:47.254 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:00:47.254 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:00:47.254 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:00:47.254 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:47.254 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:00:47.254 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:00:47.254 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:00:47.254 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:00:47.254 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:00:47.254 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:00:47.254 + source /etc/os-release 00:00:47.254 ++ NAME='Fedora Linux' 00:00:47.254 ++ VERSION='38 (Cloud Edition)' 00:00:47.254 ++ ID=fedora 00:00:47.254 ++ VERSION_ID=38 00:00:47.254 ++ VERSION_CODENAME= 00:00:47.254 ++ PLATFORM_ID=platform:f38 00:00:47.254 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:47.254 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:47.254 ++ LOGO=fedora-logo-icon 00:00:47.254 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:47.254 ++ HOME_URL=https://fedoraproject.org/ 00:00:47.254 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:47.254 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:47.254 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:47.254 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:47.254 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:47.254 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:47.254 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:47.254 ++ SUPPORT_END=2024-05-14 00:00:47.254 ++ VARIANT='Cloud Edition' 00:00:47.254 ++ VARIANT_ID=cloud 00:00:47.254 + uname -a 00:00:47.254 Linux spdk-wfp-05 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:47.254 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:00:49.785 Hugepages 00:00:49.785 node hugesize free / total 00:00:49.785 node0 1048576kB 0 / 0 00:00:49.785 node0 2048kB 0 / 0 00:00:49.785 node1 1048576kB 0 / 0 00:00:49.785 node1 2048kB 0 / 0 00:00:49.785 00:00:49.785 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:49.785 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:00:49.785 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:00:49.785 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:00:49.785 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:49.785 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:49.785 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:00:49.785 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:00:49.785 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:49.785 NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:00:49.785 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:49.785 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 
00:00:49.785 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:00:49.785 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:49.785 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:49.785 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:49.785 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:00:49.785 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:49.785 + rm -f /tmp/spdk-ld-path 00:00:49.785 + source autorun-spdk.conf 00:00:49.785 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:49.785 ++ SPDK_TEST_NVMF=1 00:00:49.785 ++ SPDK_TEST_NVME_CLI=1 00:00:49.785 ++ SPDK_TEST_NVMF_NICS=mlx5 00:00:49.785 ++ SPDK_RUN_UBSAN=1 00:00:49.785 ++ NET_TYPE=phy 00:00:49.785 ++ RUN_NIGHTLY=0 00:00:49.785 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:49.785 + [[ -n '' ]] 00:00:49.785 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:49.785 + for M in /var/spdk/build-*-manifest.txt 00:00:49.785 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:49.785 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:00:49.785 + for M in /var/spdk/build-*-manifest.txt 00:00:49.785 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:49.785 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:00:49.785 ++ uname 00:00:49.785 + [[ Linux == \L\i\n\u\x ]] 00:00:49.785 + sudo dmesg -T 00:00:49.785 + sudo dmesg --clear 00:00:49.785 + dmesg_pid=2815582 00:00:49.785 + [[ Fedora Linux == FreeBSD ]] 00:00:49.785 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:49.785 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:49.785 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:49.785 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:49.785 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:49.785 + [[ -x /usr/src/fio-static/fio ]] 00:00:49.785 + export FIO_BIN=/usr/src/fio-static/fio 00:00:49.785 + FIO_BIN=/usr/src/fio-static/fio 00:00:49.785 + sudo dmesg -Tw 00:00:49.785 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:49.785 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:49.785 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:49.785 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:49.785 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:49.785 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:49.785 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:49.785 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:49.785 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:00:49.785 Test configuration: 00:00:49.785 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:49.785 SPDK_TEST_NVMF=1 00:00:49.785 SPDK_TEST_NVME_CLI=1 00:00:49.785 SPDK_TEST_NVMF_NICS=mlx5 00:00:49.785 SPDK_RUN_UBSAN=1 00:00:49.785 NET_TYPE=phy 00:00:50.044 RUN_NIGHTLY=0 17:04:59 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:00:50.044 17:04:59 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:50.044 17:04:59 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:50.044 17:04:59 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:50.044 17:04:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:50.044 17:04:59 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:50.044 17:04:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:50.044 17:04:59 -- paths/export.sh@5 -- $ export PATH 00:00:50.044 17:04:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:50.044 17:04:59 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:00:50.044 17:04:59 -- common/autobuild_common.sh@435 -- $ date +%s 00:00:50.044 17:04:59 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713971099.XXXXXX 00:00:50.044 17:04:59 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713971099.ffttWB 00:00:50.044 17:04:59 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:00:50.044 17:04:59 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:00:50.044 17:04:59 -- common/autobuild_common.sh@444 
-- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:00:50.044 17:04:59 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:50.044 17:04:59 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:50.044 17:04:59 -- common/autobuild_common.sh@451 -- $ get_config_params 00:00:50.044 17:04:59 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:00:50.044 17:04:59 -- common/autotest_common.sh@10 -- $ set +x 00:00:50.044 17:04:59 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:00:50.044 17:04:59 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:00:50.044 17:04:59 -- pm/common@17 -- $ local monitor 00:00:50.044 17:04:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:50.044 17:04:59 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2815616 00:00:50.044 17:04:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:50.044 17:04:59 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2815617 00:00:50.044 17:04:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:50.044 17:04:59 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2815619 00:00:50.044 17:04:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:50.044 17:04:59 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2815621 00:00:50.044 17:04:59 -- pm/common@26 -- $ sleep 1 00:00:50.044 17:04:59 -- pm/common@21 -- $ date +%s 00:00:50.044 17:04:59 -- pm/common@21 -- $ date +%s 00:00:50.044 17:04:59 -- pm/common@21 -- $ date +%s 00:00:50.044 17:04:59 -- pm/common@21 -- $ date +%s 00:00:50.044 17:04:59 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713971099 00:00:50.044 17:04:59 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713971099 00:00:50.044 17:04:59 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713971099 00:00:50.044 17:04:59 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713971099 00:00:50.044 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713971099_collect-bmc-pm.bmc.pm.log 00:00:50.044 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713971099_collect-cpu-load.pm.log 00:00:50.044 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713971099_collect-vmstat.pm.log 00:00:50.044 Redirecting to 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713971099_collect-cpu-temp.pm.log 00:00:50.981 17:05:00 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:00:50.981 17:05:00 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:50.981 17:05:00 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:50.981 17:05:00 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:50.981 17:05:00 -- spdk/autobuild.sh@16 -- $ date -u 00:00:50.981 Wed Apr 24 03:05:00 PM UTC 2024 00:00:50.981 17:05:00 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:50.981 v24.05-pre-412-g0d1f30fbf 00:00:50.981 17:05:00 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:50.981 17:05:00 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:50.981 17:05:00 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:50.981 17:05:00 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:00:50.981 17:05:00 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:00:50.981 17:05:00 -- common/autotest_common.sh@10 -- $ set +x 00:00:51.239 ************************************ 00:00:51.239 START TEST ubsan 00:00:51.239 ************************************ 00:00:51.239 17:05:00 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:00:51.239 using ubsan 00:00:51.239 00:00:51.239 real 0m0.000s 00:00:51.239 user 0m0.000s 00:00:51.239 sys 0m0.000s 00:00:51.239 17:05:00 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:00:51.239 17:05:00 -- common/autotest_common.sh@10 -- $ set +x 00:00:51.239 ************************************ 00:00:51.239 END TEST ubsan 00:00:51.239 ************************************ 00:00:51.239 17:05:00 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:51.239 17:05:00 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:51.239 17:05:00 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:51.239 17:05:00 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:51.239 17:05:00 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:51.239 17:05:00 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:51.239 17:05:00 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:51.239 17:05:00 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:51.239 17:05:00 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:00:51.239 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:00:51.239 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:00:51.498 Using 'verbs' RDMA provider 00:01:04.636 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:16.887 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:16.887 Creating mk/config.mk...done. 00:01:16.887 Creating mk/cc.flags.mk...done. 00:01:16.887 Type 'make' to build. 
00:01:16.887 17:05:24 -- spdk/autobuild.sh@69 -- $ run_test make make -j96 00:01:16.887 17:05:24 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:16.887 17:05:24 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:16.887 17:05:24 -- common/autotest_common.sh@10 -- $ set +x 00:01:16.887 ************************************ 00:01:16.887 START TEST make 00:01:16.887 ************************************ 00:01:16.887 17:05:24 -- common/autotest_common.sh@1111 -- $ make -j96 00:01:16.887 make[1]: Nothing to be done for 'all'. 00:01:23.438 The Meson build system 00:01:23.438 Version: 1.3.1 00:01:23.438 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk 00:01:23.438 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp 00:01:23.438 Build type: native build 00:01:23.438 Program cat found: YES (/usr/bin/cat) 00:01:23.438 Project name: DPDK 00:01:23.438 Project version: 23.11.0 00:01:23.438 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:23.438 C linker for the host machine: cc ld.bfd 2.39-16 00:01:23.438 Host machine cpu family: x86_64 00:01:23.438 Host machine cpu: x86_64 00:01:23.438 Message: ## Building in Developer Mode ## 00:01:23.438 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:23.438 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:23.438 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:23.438 Program python3 found: YES (/usr/bin/python3) 00:01:23.438 Program cat found: YES (/usr/bin/cat) 00:01:23.438 Compiler for C supports arguments -march=native: YES 00:01:23.438 Checking for size of "void *" : 8 00:01:23.438 Checking for size of "void *" : 8 (cached) 00:01:23.438 Library m found: YES 00:01:23.438 Library numa found: YES 00:01:23.438 Has header "numaif.h" : YES 00:01:23.438 Library fdt found: NO 00:01:23.439 Library execinfo found: NO 00:01:23.439 Has header "execinfo.h" : YES 00:01:23.439 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:23.439 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:23.439 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:23.439 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:23.439 Run-time dependency openssl found: YES 3.0.9 00:01:23.439 Run-time dependency libpcap found: YES 1.10.4 00:01:23.439 Has header "pcap.h" with dependency libpcap: YES 00:01:23.439 Compiler for C supports arguments -Wcast-qual: YES 00:01:23.439 Compiler for C supports arguments -Wdeprecated: YES 00:01:23.439 Compiler for C supports arguments -Wformat: YES 00:01:23.439 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:23.439 Compiler for C supports arguments -Wformat-security: NO 00:01:23.439 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:23.439 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:23.439 Compiler for C supports arguments -Wnested-externs: YES 00:01:23.439 Compiler for C supports arguments -Wold-style-definition: YES 00:01:23.439 Compiler for C supports arguments -Wpointer-arith: YES 00:01:23.439 Compiler for C supports arguments -Wsign-compare: YES 00:01:23.439 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:23.439 Compiler for C supports arguments -Wundef: YES 00:01:23.439 Compiler for C supports arguments -Wwrite-strings: YES 00:01:23.439 Compiler for C supports arguments 
-Wno-address-of-packed-member: YES 00:01:23.439 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:23.439 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:23.439 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:23.439 Program objdump found: YES (/usr/bin/objdump) 00:01:23.439 Compiler for C supports arguments -mavx512f: YES 00:01:23.439 Checking if "AVX512 checking" compiles: YES 00:01:23.439 Fetching value of define "__SSE4_2__" : 1 00:01:23.439 Fetching value of define "__AES__" : 1 00:01:23.439 Fetching value of define "__AVX__" : 1 00:01:23.439 Fetching value of define "__AVX2__" : 1 00:01:23.439 Fetching value of define "__AVX512BW__" : 1 00:01:23.439 Fetching value of define "__AVX512CD__" : 1 00:01:23.439 Fetching value of define "__AVX512DQ__" : 1 00:01:23.439 Fetching value of define "__AVX512F__" : 1 00:01:23.439 Fetching value of define "__AVX512VL__" : 1 00:01:23.439 Fetching value of define "__PCLMUL__" : 1 00:01:23.439 Fetching value of define "__RDRND__" : 1 00:01:23.439 Fetching value of define "__RDSEED__" : 1 00:01:23.439 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:23.439 Fetching value of define "__znver1__" : (undefined) 00:01:23.439 Fetching value of define "__znver2__" : (undefined) 00:01:23.439 Fetching value of define "__znver3__" : (undefined) 00:01:23.439 Fetching value of define "__znver4__" : (undefined) 00:01:23.439 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:23.439 Message: lib/log: Defining dependency "log" 00:01:23.439 Message: lib/kvargs: Defining dependency "kvargs" 00:01:23.439 Message: lib/telemetry: Defining dependency "telemetry" 00:01:23.439 Checking for function "getentropy" : NO 00:01:23.439 Message: lib/eal: Defining dependency "eal" 00:01:23.439 Message: lib/ring: Defining dependency "ring" 00:01:23.439 Message: lib/rcu: Defining dependency "rcu" 00:01:23.439 Message: lib/mempool: Defining dependency "mempool" 00:01:23.439 Message: lib/mbuf: Defining dependency "mbuf" 00:01:23.439 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:23.439 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:23.439 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:23.439 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:23.439 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:23.439 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:23.439 Compiler for C supports arguments -mpclmul: YES 00:01:23.439 Compiler for C supports arguments -maes: YES 00:01:23.439 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:23.439 Compiler for C supports arguments -mavx512bw: YES 00:01:23.439 Compiler for C supports arguments -mavx512dq: YES 00:01:23.439 Compiler for C supports arguments -mavx512vl: YES 00:01:23.439 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:23.439 Compiler for C supports arguments -mavx2: YES 00:01:23.439 Compiler for C supports arguments -mavx: YES 00:01:23.439 Message: lib/net: Defining dependency "net" 00:01:23.439 Message: lib/meter: Defining dependency "meter" 00:01:23.439 Message: lib/ethdev: Defining dependency "ethdev" 00:01:23.439 Message: lib/pci: Defining dependency "pci" 00:01:23.439 Message: lib/cmdline: Defining dependency "cmdline" 00:01:23.439 Message: lib/hash: Defining dependency "hash" 00:01:23.439 Message: lib/timer: Defining dependency "timer" 00:01:23.439 Message: lib/compressdev: Defining dependency "compressdev" 00:01:23.439 Message: lib/cryptodev: 
Defining dependency "cryptodev" 00:01:23.439 Message: lib/dmadev: Defining dependency "dmadev" 00:01:23.439 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:23.439 Message: lib/power: Defining dependency "power" 00:01:23.439 Message: lib/reorder: Defining dependency "reorder" 00:01:23.439 Message: lib/security: Defining dependency "security" 00:01:23.439 Has header "linux/userfaultfd.h" : YES 00:01:23.439 Has header "linux/vduse.h" : YES 00:01:23.439 Message: lib/vhost: Defining dependency "vhost" 00:01:23.439 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:23.439 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:23.439 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:23.439 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:23.439 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:23.439 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:23.439 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:23.439 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:23.439 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:23.439 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:23.439 Program doxygen found: YES (/usr/bin/doxygen) 00:01:23.439 Configuring doxy-api-html.conf using configuration 00:01:23.439 Configuring doxy-api-man.conf using configuration 00:01:23.439 Program mandb found: YES (/usr/bin/mandb) 00:01:23.439 Program sphinx-build found: NO 00:01:23.439 Configuring rte_build_config.h using configuration 00:01:23.439 Message: 00:01:23.439 ================= 00:01:23.439 Applications Enabled 00:01:23.439 ================= 00:01:23.439 00:01:23.439 apps: 00:01:23.439 00:01:23.439 00:01:23.439 Message: 00:01:23.439 ================= 00:01:23.439 Libraries Enabled 00:01:23.439 ================= 00:01:23.439 00:01:23.439 libs: 00:01:23.439 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:23.439 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:23.439 cryptodev, dmadev, power, reorder, security, vhost, 00:01:23.439 00:01:23.439 Message: 00:01:23.439 =============== 00:01:23.439 Drivers Enabled 00:01:23.439 =============== 00:01:23.439 00:01:23.439 common: 00:01:23.439 00:01:23.439 bus: 00:01:23.439 pci, vdev, 00:01:23.439 mempool: 00:01:23.439 ring, 00:01:23.439 dma: 00:01:23.439 00:01:23.439 net: 00:01:23.439 00:01:23.439 crypto: 00:01:23.439 00:01:23.439 compress: 00:01:23.439 00:01:23.439 vdpa: 00:01:23.439 00:01:23.439 00:01:23.439 Message: 00:01:23.439 ================= 00:01:23.439 Content Skipped 00:01:23.439 ================= 00:01:23.439 00:01:23.439 apps: 00:01:23.439 dumpcap: explicitly disabled via build config 00:01:23.439 graph: explicitly disabled via build config 00:01:23.439 pdump: explicitly disabled via build config 00:01:23.439 proc-info: explicitly disabled via build config 00:01:23.439 test-acl: explicitly disabled via build config 00:01:23.439 test-bbdev: explicitly disabled via build config 00:01:23.439 test-cmdline: explicitly disabled via build config 00:01:23.439 test-compress-perf: explicitly disabled via build config 00:01:23.439 test-crypto-perf: explicitly disabled via build config 00:01:23.439 test-dma-perf: explicitly disabled via build config 00:01:23.439 test-eventdev: explicitly disabled via build config 00:01:23.439 test-fib: explicitly disabled via build config 00:01:23.439 
test-flow-perf: explicitly disabled via build config 00:01:23.439 test-gpudev: explicitly disabled via build config 00:01:23.439 test-mldev: explicitly disabled via build config 00:01:23.439 test-pipeline: explicitly disabled via build config 00:01:23.439 test-pmd: explicitly disabled via build config 00:01:23.439 test-regex: explicitly disabled via build config 00:01:23.439 test-sad: explicitly disabled via build config 00:01:23.439 test-security-perf: explicitly disabled via build config 00:01:23.439 00:01:23.439 libs: 00:01:23.439 metrics: explicitly disabled via build config 00:01:23.439 acl: explicitly disabled via build config 00:01:23.439 bbdev: explicitly disabled via build config 00:01:23.439 bitratestats: explicitly disabled via build config 00:01:23.439 bpf: explicitly disabled via build config 00:01:23.439 cfgfile: explicitly disabled via build config 00:01:23.439 distributor: explicitly disabled via build config 00:01:23.439 efd: explicitly disabled via build config 00:01:23.439 eventdev: explicitly disabled via build config 00:01:23.439 dispatcher: explicitly disabled via build config 00:01:23.439 gpudev: explicitly disabled via build config 00:01:23.439 gro: explicitly disabled via build config 00:01:23.439 gso: explicitly disabled via build config 00:01:23.439 ip_frag: explicitly disabled via build config 00:01:23.439 jobstats: explicitly disabled via build config 00:01:23.439 latencystats: explicitly disabled via build config 00:01:23.439 lpm: explicitly disabled via build config 00:01:23.439 member: explicitly disabled via build config 00:01:23.439 pcapng: explicitly disabled via build config 00:01:23.440 rawdev: explicitly disabled via build config 00:01:23.440 regexdev: explicitly disabled via build config 00:01:23.440 mldev: explicitly disabled via build config 00:01:23.440 rib: explicitly disabled via build config 00:01:23.440 sched: explicitly disabled via build config 00:01:23.440 stack: explicitly disabled via build config 00:01:23.440 ipsec: explicitly disabled via build config 00:01:23.440 pdcp: explicitly disabled via build config 00:01:23.440 fib: explicitly disabled via build config 00:01:23.440 port: explicitly disabled via build config 00:01:23.440 pdump: explicitly disabled via build config 00:01:23.440 table: explicitly disabled via build config 00:01:23.440 pipeline: explicitly disabled via build config 00:01:23.440 graph: explicitly disabled via build config 00:01:23.440 node: explicitly disabled via build config 00:01:23.440 00:01:23.440 drivers: 00:01:23.440 common/cpt: not in enabled drivers build config 00:01:23.440 common/dpaax: not in enabled drivers build config 00:01:23.440 common/iavf: not in enabled drivers build config 00:01:23.440 common/idpf: not in enabled drivers build config 00:01:23.440 common/mvep: not in enabled drivers build config 00:01:23.440 common/octeontx: not in enabled drivers build config 00:01:23.440 bus/auxiliary: not in enabled drivers build config 00:01:23.440 bus/cdx: not in enabled drivers build config 00:01:23.440 bus/dpaa: not in enabled drivers build config 00:01:23.440 bus/fslmc: not in enabled drivers build config 00:01:23.440 bus/ifpga: not in enabled drivers build config 00:01:23.440 bus/platform: not in enabled drivers build config 00:01:23.440 bus/vmbus: not in enabled drivers build config 00:01:23.440 common/cnxk: not in enabled drivers build config 00:01:23.440 common/mlx5: not in enabled drivers build config 00:01:23.440 common/nfp: not in enabled drivers build config 00:01:23.440 common/qat: not in enabled 
drivers build config 00:01:23.440 common/sfc_efx: not in enabled drivers build config 00:01:23.440 mempool/bucket: not in enabled drivers build config 00:01:23.440 mempool/cnxk: not in enabled drivers build config 00:01:23.440 mempool/dpaa: not in enabled drivers build config 00:01:23.440 mempool/dpaa2: not in enabled drivers build config 00:01:23.440 mempool/octeontx: not in enabled drivers build config 00:01:23.440 mempool/stack: not in enabled drivers build config 00:01:23.440 dma/cnxk: not in enabled drivers build config 00:01:23.440 dma/dpaa: not in enabled drivers build config 00:01:23.440 dma/dpaa2: not in enabled drivers build config 00:01:23.440 dma/hisilicon: not in enabled drivers build config 00:01:23.440 dma/idxd: not in enabled drivers build config 00:01:23.440 dma/ioat: not in enabled drivers build config 00:01:23.440 dma/skeleton: not in enabled drivers build config 00:01:23.440 net/af_packet: not in enabled drivers build config 00:01:23.440 net/af_xdp: not in enabled drivers build config 00:01:23.440 net/ark: not in enabled drivers build config 00:01:23.440 net/atlantic: not in enabled drivers build config 00:01:23.440 net/avp: not in enabled drivers build config 00:01:23.440 net/axgbe: not in enabled drivers build config 00:01:23.440 net/bnx2x: not in enabled drivers build config 00:01:23.440 net/bnxt: not in enabled drivers build config 00:01:23.440 net/bonding: not in enabled drivers build config 00:01:23.440 net/cnxk: not in enabled drivers build config 00:01:23.440 net/cpfl: not in enabled drivers build config 00:01:23.440 net/cxgbe: not in enabled drivers build config 00:01:23.440 net/dpaa: not in enabled drivers build config 00:01:23.440 net/dpaa2: not in enabled drivers build config 00:01:23.440 net/e1000: not in enabled drivers build config 00:01:23.440 net/ena: not in enabled drivers build config 00:01:23.440 net/enetc: not in enabled drivers build config 00:01:23.440 net/enetfec: not in enabled drivers build config 00:01:23.440 net/enic: not in enabled drivers build config 00:01:23.440 net/failsafe: not in enabled drivers build config 00:01:23.440 net/fm10k: not in enabled drivers build config 00:01:23.440 net/gve: not in enabled drivers build config 00:01:23.440 net/hinic: not in enabled drivers build config 00:01:23.440 net/hns3: not in enabled drivers build config 00:01:23.440 net/i40e: not in enabled drivers build config 00:01:23.440 net/iavf: not in enabled drivers build config 00:01:23.440 net/ice: not in enabled drivers build config 00:01:23.440 net/idpf: not in enabled drivers build config 00:01:23.440 net/igc: not in enabled drivers build config 00:01:23.440 net/ionic: not in enabled drivers build config 00:01:23.440 net/ipn3ke: not in enabled drivers build config 00:01:23.440 net/ixgbe: not in enabled drivers build config 00:01:23.440 net/mana: not in enabled drivers build config 00:01:23.440 net/memif: not in enabled drivers build config 00:01:23.440 net/mlx4: not in enabled drivers build config 00:01:23.440 net/mlx5: not in enabled drivers build config 00:01:23.440 net/mvneta: not in enabled drivers build config 00:01:23.440 net/mvpp2: not in enabled drivers build config 00:01:23.440 net/netvsc: not in enabled drivers build config 00:01:23.440 net/nfb: not in enabled drivers build config 00:01:23.440 net/nfp: not in enabled drivers build config 00:01:23.440 net/ngbe: not in enabled drivers build config 00:01:23.440 net/null: not in enabled drivers build config 00:01:23.440 net/octeontx: not in enabled drivers build config 00:01:23.440 net/octeon_ep: 
not in enabled drivers build config 00:01:23.440 net/pcap: not in enabled drivers build config 00:01:23.440 net/pfe: not in enabled drivers build config 00:01:23.440 net/qede: not in enabled drivers build config 00:01:23.440 net/ring: not in enabled drivers build config 00:01:23.440 net/sfc: not in enabled drivers build config 00:01:23.440 net/softnic: not in enabled drivers build config 00:01:23.440 net/tap: not in enabled drivers build config 00:01:23.440 net/thunderx: not in enabled drivers build config 00:01:23.440 net/txgbe: not in enabled drivers build config 00:01:23.440 net/vdev_netvsc: not in enabled drivers build config 00:01:23.440 net/vhost: not in enabled drivers build config 00:01:23.440 net/virtio: not in enabled drivers build config 00:01:23.440 net/vmxnet3: not in enabled drivers build config 00:01:23.440 raw/*: missing internal dependency, "rawdev" 00:01:23.440 crypto/armv8: not in enabled drivers build config 00:01:23.440 crypto/bcmfs: not in enabled drivers build config 00:01:23.440 crypto/caam_jr: not in enabled drivers build config 00:01:23.440 crypto/ccp: not in enabled drivers build config 00:01:23.440 crypto/cnxk: not in enabled drivers build config 00:01:23.440 crypto/dpaa_sec: not in enabled drivers build config 00:01:23.440 crypto/dpaa2_sec: not in enabled drivers build config 00:01:23.440 crypto/ipsec_mb: not in enabled drivers build config 00:01:23.440 crypto/mlx5: not in enabled drivers build config 00:01:23.440 crypto/mvsam: not in enabled drivers build config 00:01:23.440 crypto/nitrox: not in enabled drivers build config 00:01:23.440 crypto/null: not in enabled drivers build config 00:01:23.440 crypto/octeontx: not in enabled drivers build config 00:01:23.440 crypto/openssl: not in enabled drivers build config 00:01:23.440 crypto/scheduler: not in enabled drivers build config 00:01:23.440 crypto/uadk: not in enabled drivers build config 00:01:23.440 crypto/virtio: not in enabled drivers build config 00:01:23.440 compress/isal: not in enabled drivers build config 00:01:23.440 compress/mlx5: not in enabled drivers build config 00:01:23.440 compress/octeontx: not in enabled drivers build config 00:01:23.440 compress/zlib: not in enabled drivers build config 00:01:23.440 regex/*: missing internal dependency, "regexdev" 00:01:23.440 ml/*: missing internal dependency, "mldev" 00:01:23.440 vdpa/ifc: not in enabled drivers build config 00:01:23.440 vdpa/mlx5: not in enabled drivers build config 00:01:23.440 vdpa/nfp: not in enabled drivers build config 00:01:23.440 vdpa/sfc: not in enabled drivers build config 00:01:23.440 event/*: missing internal dependency, "eventdev" 00:01:23.440 baseband/*: missing internal dependency, "bbdev" 00:01:23.440 gpu/*: missing internal dependency, "gpudev" 00:01:23.440 00:01:23.440 00:01:23.698 Build targets in project: 85 00:01:23.698 00:01:23.698 DPDK 23.11.0 00:01:23.698 00:01:23.698 User defined options 00:01:23.698 buildtype : debug 00:01:23.698 default_library : shared 00:01:23.698 libdir : lib 00:01:23.698 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:23.698 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:23.698 c_link_args : 00:01:23.698 cpu_instruction_set: native 00:01:23.698 disable_apps : test-acl,test-bbdev,test-crypto-perf,test-fib,test-pipeline,test-gpudev,test-flow-perf,pdump,dumpcap,test-sad,test-cmdline,test-eventdev,proc-info,test,test-dma-perf,test-pmd,test-mldev,test-compress-perf,test-security-perf,graph,test-regex 00:01:23.698 
disable_libs : pipeline,member,eventdev,efd,bbdev,cfgfile,rib,sched,mldev,metrics,lpm,latencystats,pdump,pdcp,bpf,ipsec,fib,ip_frag,table,port,stack,gro,jobstats,regexdev,rawdev,pcapng,dispatcher,node,bitratestats,acl,gpudev,distributor,graph,gso 00:01:23.698 enable_docs : false 00:01:23.698 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:23.698 enable_kmods : false 00:01:23.698 tests : false 00:01:23.698 00:01:23.698 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:24.273 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:01:24.273 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:24.273 [2/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:24.273 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:24.273 [4/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:24.273 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:24.273 [6/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:24.273 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:24.273 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:24.273 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:24.273 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:24.273 [11/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:24.273 [12/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:24.273 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:24.273 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:24.273 [15/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:24.273 [16/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:24.273 [17/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:24.273 [18/265] Linking static target lib/librte_kvargs.a 00:01:24.273 [19/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:24.531 [20/265] Linking static target lib/librte_log.a 00:01:24.531 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:24.531 [22/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:24.531 [23/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:24.531 [24/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:24.531 [25/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:24.531 [26/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:24.531 [27/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:24.531 [28/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:24.531 [29/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:24.531 [30/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:24.531 [31/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:24.531 [32/265] Linking static target lib/librte_pci.a 00:01:24.531 [33/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:24.531 [34/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:24.531 [35/265] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:24.531 [36/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:24.531 [37/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:24.792 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:24.792 [39/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:24.792 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:24.792 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:24.792 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:24.792 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:24.792 [44/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:24.792 [45/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:24.792 [46/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:24.792 [47/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:24.792 [48/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:24.792 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:24.792 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:24.792 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:24.792 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:24.792 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:24.792 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:24.792 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:24.792 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:24.792 [57/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:24.792 [58/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:24.792 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:24.792 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:24.792 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:24.792 [62/265] Linking static target lib/librte_meter.a 00:01:24.792 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:24.792 [64/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:24.792 [65/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:24.792 [66/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:24.792 [67/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:24.792 [68/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:24.792 [69/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:24.792 [70/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:24.792 [71/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:24.792 [72/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:24.792 [73/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:24.792 [74/265] Linking static target lib/librte_ring.a 00:01:24.792 [75/265] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:24.792 [76/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:24.792 [77/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:24.792 [78/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:24.792 [79/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:25.049 [80/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:25.049 [81/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.049 [82/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:25.049 [83/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:25.049 [84/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:25.049 [85/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:25.049 [86/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:25.049 [87/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:25.049 [88/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:25.049 [89/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:25.049 [90/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:25.049 [91/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:25.049 [92/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:25.049 [93/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:25.049 [94/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:25.049 [95/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:25.049 [96/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.049 [97/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:25.049 [98/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:25.049 [99/265] Linking static target lib/librte_telemetry.a 00:01:25.049 [100/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:25.049 [101/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:25.049 [102/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:25.049 [103/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:25.049 [104/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:25.049 [105/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:25.049 [106/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:25.049 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:25.049 [108/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:25.049 [109/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:25.049 [110/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:25.049 [111/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:25.049 [112/265] Linking static target lib/librte_net.a 00:01:25.049 [113/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:25.049 [114/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:25.049 [115/265] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:25.049 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:25.049 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:25.049 [118/265] Linking static target lib/librte_cmdline.a 00:01:25.049 [119/265] Linking static target lib/librte_mempool.a 00:01:25.049 [120/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:25.049 [121/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:25.049 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:25.049 [123/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:25.049 [124/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:25.049 [125/265] Linking static target lib/librte_rcu.a 00:01:25.049 [126/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:25.049 [127/265] Linking static target lib/librte_timer.a 00:01:25.049 [128/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:25.049 [129/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:25.049 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:25.049 [131/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:25.049 [132/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:25.049 [133/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:25.050 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:25.050 [135/265] Linking static target lib/librte_eal.a 00:01:25.050 [136/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:25.050 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:25.050 [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:25.050 [139/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.050 [140/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:25.050 [141/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.050 [142/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:25.307 [143/265] Linking static target lib/librte_compressdev.a 00:01:25.307 [144/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:25.307 [145/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.307 [146/265] Linking target lib/librte_log.so.24.0 00:01:25.307 [147/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:25.307 [148/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:25.307 [149/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.307 [150/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:25.307 [151/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:25.307 [152/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:25.307 [153/265] Linking static target lib/librte_mbuf.a 00:01:25.307 [154/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:25.307 [155/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:25.307 [156/265] Linking static target 
lib/librte_dmadev.a 00:01:25.307 [157/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:25.307 [158/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:25.307 [159/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:25.307 [160/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:25.307 [161/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:25.307 [162/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:25.307 [163/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:25.307 [164/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.307 [165/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:25.307 [166/265] Linking target lib/librte_kvargs.so.24.0 00:01:25.307 [167/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:25.307 [168/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:25.307 [169/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:25.307 [170/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:25.307 [171/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:25.307 [172/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:25.307 [173/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:25.307 [174/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:25.307 [175/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:25.307 [176/265] Linking static target lib/librte_power.a 00:01:25.565 [177/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.565 [178/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:25.565 [179/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:25.565 [180/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:25.565 [181/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.565 [182/265] Linking static target lib/librte_hash.a 00:01:25.565 [183/265] Linking static target lib/librte_security.a 00:01:25.565 [184/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:25.565 [185/265] Linking target lib/librte_telemetry.so.24.0 00:01:25.565 [186/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:25.565 [187/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:25.565 [188/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:25.565 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:25.565 [190/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:25.565 [191/265] Linking static target lib/librte_reorder.a 00:01:25.565 [192/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:25.565 [193/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:25.565 [194/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:25.565 [195/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:25.565 [196/265] Compiling C object 
drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:25.565 [197/265] Linking static target drivers/librte_bus_vdev.a 00:01:25.565 [198/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:25.565 [199/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:25.565 [200/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:25.823 [201/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:25.823 [202/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:25.823 [203/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:25.823 [204/265] Linking static target lib/librte_cryptodev.a 00:01:25.823 [205/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:25.823 [206/265] Linking static target drivers/librte_bus_pci.a 00:01:25.823 [207/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.823 [208/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.823 [209/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:25.823 [210/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.823 [211/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:25.823 [212/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:25.823 [213/265] Linking static target drivers/librte_mempool_ring.a 00:01:26.080 [214/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.080 [215/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.081 [216/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.081 [217/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.081 [218/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:26.081 [219/265] Linking static target lib/librte_ethdev.a 00:01:26.081 [220/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.081 [221/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:26.338 [222/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.338 [223/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.338 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.271 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:27.271 [226/265] Linking static target lib/librte_vhost.a 00:01:27.529 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.900 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.156 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.721 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.979 [231/265] Linking target lib/librte_eal.so.24.0 00:01:34.979 [232/265] Generating symbol file 
lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:34.979 [233/265] Linking target lib/librte_pci.so.24.0 00:01:34.979 [234/265] Linking target lib/librte_timer.so.24.0 00:01:34.979 [235/265] Linking target lib/librte_ring.so.24.0 00:01:34.979 [236/265] Linking target lib/librte_meter.so.24.0 00:01:34.979 [237/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:34.979 [238/265] Linking target lib/librte_dmadev.so.24.0 00:01:35.237 [239/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:35.237 [240/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:35.237 [241/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:35.237 [242/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:35.237 [243/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:35.237 [244/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:35.237 [245/265] Linking target lib/librte_mempool.so.24.0 00:01:35.237 [246/265] Linking target lib/librte_rcu.so.24.0 00:01:35.237 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:35.237 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:35.496 [249/265] Linking target lib/librte_mbuf.so.24.0 00:01:35.496 [250/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:35.496 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:35.496 [252/265] Linking target lib/librte_compressdev.so.24.0 00:01:35.496 [253/265] Linking target lib/librte_reorder.so.24.0 00:01:35.496 [254/265] Linking target lib/librte_cryptodev.so.24.0 00:01:35.496 [255/265] Linking target lib/librte_net.so.24.0 00:01:35.754 [256/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:35.754 [257/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:35.754 [258/265] Linking target lib/librte_hash.so.24.0 00:01:35.754 [259/265] Linking target lib/librte_security.so.24.0 00:01:35.754 [260/265] Linking target lib/librte_cmdline.so.24.0 00:01:35.754 [261/265] Linking target lib/librte_ethdev.so.24.0 00:01:35.754 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:36.011 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:36.011 [264/265] Linking target lib/librte_vhost.so.24.0 00:01:36.011 [265/265] Linking target lib/librte_power.so.24.0 00:01:36.011 INFO: autodetecting backend as ninja 00:01:36.011 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 96 00:01:36.967 CC lib/ut/ut.o 00:01:36.967 CC lib/ut_mock/mock.o 00:01:36.967 CC lib/log/log.o 00:01:36.967 CC lib/log/log_deprecated.o 00:01:36.967 CC lib/log/log_flags.o 00:01:36.967 LIB libspdk_ut_mock.a 00:01:36.967 LIB libspdk_ut.a 00:01:36.967 LIB libspdk_log.a 00:01:36.967 SO libspdk_ut.so.2.0 00:01:36.967 SO libspdk_ut_mock.so.6.0 00:01:37.224 SO libspdk_log.so.7.0 00:01:37.224 SYMLINK libspdk_ut.so 00:01:37.224 SYMLINK libspdk_ut_mock.so 00:01:37.224 SYMLINK libspdk_log.so 00:01:37.481 CXX lib/trace_parser/trace.o 00:01:37.481 CC lib/ioat/ioat.o 00:01:37.481 CC lib/util/base64.o 00:01:37.481 CC lib/util/bit_array.o 00:01:37.481 CC lib/dma/dma.o 00:01:37.481 CC lib/util/cpuset.o 00:01:37.481 CC 
lib/util/crc16.o 00:01:37.481 CC lib/util/crc32.o 00:01:37.481 CC lib/util/crc64.o 00:01:37.481 CC lib/util/crc32c.o 00:01:37.481 CC lib/util/crc32_ieee.o 00:01:37.481 CC lib/util/fd.o 00:01:37.481 CC lib/util/dif.o 00:01:37.481 CC lib/util/file.o 00:01:37.481 CC lib/util/hexlify.o 00:01:37.481 CC lib/util/iov.o 00:01:37.481 CC lib/util/math.o 00:01:37.481 CC lib/util/pipe.o 00:01:37.481 CC lib/util/strerror_tls.o 00:01:37.481 CC lib/util/string.o 00:01:37.481 CC lib/util/uuid.o 00:01:37.481 CC lib/util/fd_group.o 00:01:37.481 CC lib/util/xor.o 00:01:37.481 CC lib/util/zipf.o 00:01:37.738 CC lib/vfio_user/host/vfio_user_pci.o 00:01:37.738 CC lib/vfio_user/host/vfio_user.o 00:01:37.738 LIB libspdk_dma.a 00:01:37.738 LIB libspdk_ioat.a 00:01:37.738 SO libspdk_dma.so.4.0 00:01:37.738 SO libspdk_ioat.so.7.0 00:01:37.738 SYMLINK libspdk_dma.so 00:01:37.738 SYMLINK libspdk_ioat.so 00:01:37.738 LIB libspdk_vfio_user.a 00:01:37.738 SO libspdk_vfio_user.so.5.0 00:01:37.997 LIB libspdk_util.a 00:01:37.997 SYMLINK libspdk_vfio_user.so 00:01:37.997 SO libspdk_util.so.9.0 00:01:37.997 LIB libspdk_trace_parser.a 00:01:37.997 SYMLINK libspdk_util.so 00:01:37.997 SO libspdk_trace_parser.so.5.0 00:01:38.255 SYMLINK libspdk_trace_parser.so 00:01:38.255 CC lib/env_dpdk/memory.o 00:01:38.255 CC lib/env_dpdk/env.o 00:01:38.255 CC lib/env_dpdk/pci.o 00:01:38.255 CC lib/env_dpdk/pci_ioat.o 00:01:38.255 CC lib/env_dpdk/init.o 00:01:38.255 CC lib/env_dpdk/threads.o 00:01:38.255 CC lib/env_dpdk/pci_virtio.o 00:01:38.255 CC lib/env_dpdk/pci_vmd.o 00:01:38.255 CC lib/env_dpdk/pci_idxd.o 00:01:38.255 CC lib/env_dpdk/pci_event.o 00:01:38.255 CC lib/env_dpdk/sigbus_handler.o 00:01:38.255 CC lib/env_dpdk/pci_dpdk.o 00:01:38.255 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:38.255 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:38.255 CC lib/vmd/led.o 00:01:38.255 CC lib/vmd/vmd.o 00:01:38.255 CC lib/json/json_parse.o 00:01:38.255 CC lib/rdma/common.o 00:01:38.255 CC lib/json/json_write.o 00:01:38.513 CC lib/json/json_util.o 00:01:38.513 CC lib/rdma/rdma_verbs.o 00:01:38.513 CC lib/conf/conf.o 00:01:38.513 CC lib/idxd/idxd.o 00:01:38.513 CC lib/idxd/idxd_user.o 00:01:38.513 LIB libspdk_conf.a 00:01:38.513 SO libspdk_conf.so.6.0 00:01:38.770 LIB libspdk_rdma.a 00:01:38.770 LIB libspdk_json.a 00:01:38.770 SO libspdk_rdma.so.6.0 00:01:38.770 SYMLINK libspdk_conf.so 00:01:38.770 SO libspdk_json.so.6.0 00:01:38.770 SYMLINK libspdk_rdma.so 00:01:38.770 SYMLINK libspdk_json.so 00:01:38.770 LIB libspdk_idxd.a 00:01:38.770 SO libspdk_idxd.so.12.0 00:01:38.770 LIB libspdk_vmd.a 00:01:39.027 SO libspdk_vmd.so.6.0 00:01:39.027 SYMLINK libspdk_idxd.so 00:01:39.027 SYMLINK libspdk_vmd.so 00:01:39.027 CC lib/jsonrpc/jsonrpc_server.o 00:01:39.027 CC lib/jsonrpc/jsonrpc_client.o 00:01:39.027 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:39.027 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:39.284 LIB libspdk_jsonrpc.a 00:01:39.284 SO libspdk_jsonrpc.so.6.0 00:01:39.284 LIB libspdk_env_dpdk.a 00:01:39.284 SYMLINK libspdk_jsonrpc.so 00:01:39.284 SO libspdk_env_dpdk.so.14.0 00:01:39.542 SYMLINK libspdk_env_dpdk.so 00:01:39.800 CC lib/rpc/rpc.o 00:01:39.800 LIB libspdk_rpc.a 00:01:39.800 SO libspdk_rpc.so.6.0 00:01:39.800 SYMLINK libspdk_rpc.so 00:01:40.364 CC lib/notify/notify.o 00:01:40.364 CC lib/notify/notify_rpc.o 00:01:40.364 CC lib/keyring/keyring.o 00:01:40.364 CC lib/keyring/keyring_rpc.o 00:01:40.364 CC lib/trace/trace.o 00:01:40.364 CC lib/trace/trace_flags.o 00:01:40.364 CC lib/trace/trace_rpc.o 00:01:40.364 LIB libspdk_notify.a 00:01:40.364 SO 
libspdk_notify.so.6.0 00:01:40.364 LIB libspdk_keyring.a 00:01:40.364 LIB libspdk_trace.a 00:01:40.364 SYMLINK libspdk_notify.so 00:01:40.364 SO libspdk_keyring.so.1.0 00:01:40.364 SO libspdk_trace.so.10.0 00:01:40.621 SYMLINK libspdk_keyring.so 00:01:40.621 SYMLINK libspdk_trace.so 00:01:40.879 CC lib/sock/sock.o 00:01:40.879 CC lib/sock/sock_rpc.o 00:01:40.879 CC lib/thread/iobuf.o 00:01:40.879 CC lib/thread/thread.o 00:01:41.136 LIB libspdk_sock.a 00:01:41.136 SO libspdk_sock.so.9.0 00:01:41.136 SYMLINK libspdk_sock.so 00:01:41.394 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:41.394 CC lib/nvme/nvme_ctrlr.o 00:01:41.394 CC lib/nvme/nvme_fabric.o 00:01:41.394 CC lib/nvme/nvme_ns_cmd.o 00:01:41.394 CC lib/nvme/nvme_ns.o 00:01:41.394 CC lib/nvme/nvme_pcie_common.o 00:01:41.394 CC lib/nvme/nvme_pcie.o 00:01:41.394 CC lib/nvme/nvme.o 00:01:41.394 CC lib/nvme/nvme_qpair.o 00:01:41.394 CC lib/nvme/nvme_quirks.o 00:01:41.394 CC lib/nvme/nvme_transport.o 00:01:41.394 CC lib/nvme/nvme_discovery.o 00:01:41.394 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:41.394 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:41.394 CC lib/nvme/nvme_opal.o 00:01:41.394 CC lib/nvme/nvme_tcp.o 00:01:41.394 CC lib/nvme/nvme_io_msg.o 00:01:41.394 CC lib/nvme/nvme_poll_group.o 00:01:41.394 CC lib/nvme/nvme_zns.o 00:01:41.394 CC lib/nvme/nvme_auth.o 00:01:41.652 CC lib/nvme/nvme_stubs.o 00:01:41.652 CC lib/nvme/nvme_cuse.o 00:01:41.652 CC lib/nvme/nvme_rdma.o 00:01:41.909 LIB libspdk_thread.a 00:01:41.909 SO libspdk_thread.so.10.0 00:01:41.909 SYMLINK libspdk_thread.so 00:01:42.165 CC lib/accel/accel_rpc.o 00:01:42.166 CC lib/accel/accel.o 00:01:42.166 CC lib/accel/accel_sw.o 00:01:42.166 CC lib/blob/request.o 00:01:42.166 CC lib/blob/blobstore.o 00:01:42.166 CC lib/blob/zeroes.o 00:01:42.166 CC lib/blob/blob_bs_dev.o 00:01:42.166 CC lib/init/json_config.o 00:01:42.166 CC lib/init/subsystem.o 00:01:42.166 CC lib/init/subsystem_rpc.o 00:01:42.166 CC lib/init/rpc.o 00:01:42.166 CC lib/virtio/virtio.o 00:01:42.166 CC lib/virtio/virtio_vhost_user.o 00:01:42.166 CC lib/virtio/virtio_vfio_user.o 00:01:42.166 CC lib/virtio/virtio_pci.o 00:01:42.424 LIB libspdk_init.a 00:01:42.424 SO libspdk_init.so.5.0 00:01:42.424 LIB libspdk_virtio.a 00:01:42.682 SYMLINK libspdk_init.so 00:01:42.682 SO libspdk_virtio.so.7.0 00:01:42.682 SYMLINK libspdk_virtio.so 00:01:42.939 CC lib/event/app.o 00:01:42.939 CC lib/event/app_rpc.o 00:01:42.939 CC lib/event/reactor.o 00:01:42.939 CC lib/event/log_rpc.o 00:01:42.939 CC lib/event/scheduler_static.o 00:01:42.939 LIB libspdk_accel.a 00:01:42.939 SO libspdk_accel.so.15.0 00:01:43.196 LIB libspdk_nvme.a 00:01:43.196 SYMLINK libspdk_accel.so 00:01:43.196 LIB libspdk_event.a 00:01:43.196 SO libspdk_nvme.so.13.0 00:01:43.196 SO libspdk_event.so.13.0 00:01:43.196 SYMLINK libspdk_event.so 00:01:43.454 CC lib/bdev/bdev.o 00:01:43.454 CC lib/bdev/bdev_zone.o 00:01:43.454 CC lib/bdev/bdev_rpc.o 00:01:43.454 CC lib/bdev/scsi_nvme.o 00:01:43.454 CC lib/bdev/part.o 00:01:43.454 SYMLINK libspdk_nvme.so 00:01:44.388 LIB libspdk_blob.a 00:01:44.388 SO libspdk_blob.so.11.0 00:01:44.388 SYMLINK libspdk_blob.so 00:01:44.646 CC lib/lvol/lvol.o 00:01:44.646 CC lib/blobfs/blobfs.o 00:01:44.646 CC lib/blobfs/tree.o 00:01:45.212 LIB libspdk_bdev.a 00:01:45.212 SO libspdk_bdev.so.15.0 00:01:45.212 LIB libspdk_blobfs.a 00:01:45.212 LIB libspdk_lvol.a 00:01:45.212 SO libspdk_blobfs.so.10.0 00:01:45.212 SO libspdk_lvol.so.10.0 00:01:45.212 SYMLINK libspdk_bdev.so 00:01:45.212 SYMLINK libspdk_blobfs.so 00:01:45.212 SYMLINK libspdk_lvol.so 00:01:45.470 
CC lib/ftl/ftl_core.o 00:01:45.470 CC lib/ftl/ftl_init.o 00:01:45.470 CC lib/ftl/ftl_layout.o 00:01:45.470 CC lib/ftl/ftl_debug.o 00:01:45.470 CC lib/ftl/ftl_io.o 00:01:45.470 CC lib/scsi/dev.o 00:01:45.470 CC lib/ftl/ftl_sb.o 00:01:45.470 CC lib/scsi/lun.o 00:01:45.470 CC lib/ftl/ftl_l2p.o 00:01:45.470 CC lib/scsi/port.o 00:01:45.470 CC lib/ftl/ftl_l2p_flat.o 00:01:45.470 CC lib/scsi/scsi.o 00:01:45.470 CC lib/ftl/ftl_nv_cache.o 00:01:45.470 CC lib/scsi/scsi_bdev.o 00:01:45.470 CC lib/ftl/ftl_band.o 00:01:45.470 CC lib/scsi/scsi_rpc.o 00:01:45.470 CC lib/ftl/ftl_band_ops.o 00:01:45.470 CC lib/scsi/scsi_pr.o 00:01:45.470 CC lib/ftl/ftl_writer.o 00:01:45.470 CC lib/ftl/ftl_rq.o 00:01:45.470 CC lib/ftl/ftl_reloc.o 00:01:45.470 CC lib/scsi/task.o 00:01:45.470 CC lib/ftl/ftl_l2p_cache.o 00:01:45.470 CC lib/ftl/ftl_p2l.o 00:01:45.470 CC lib/ftl/mngt/ftl_mngt.o 00:01:45.470 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:45.470 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:45.470 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:45.470 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:45.470 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:45.470 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:45.470 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:45.470 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:45.470 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:45.470 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:45.470 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:45.470 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:45.470 CC lib/ftl/utils/ftl_conf.o 00:01:45.470 CC lib/nvmf/ctrlr.o 00:01:45.470 CC lib/nvmf/ctrlr_discovery.o 00:01:45.470 CC lib/nbd/nbd_rpc.o 00:01:45.470 CC lib/nbd/nbd.o 00:01:45.470 CC lib/ftl/utils/ftl_md.o 00:01:45.470 CC lib/nvmf/ctrlr_bdev.o 00:01:45.470 CC lib/nvmf/subsystem.o 00:01:45.470 CC lib/nvmf/nvmf.o 00:01:45.470 CC lib/ftl/utils/ftl_mempool.o 00:01:45.470 CC lib/ftl/utils/ftl_bitmap.o 00:01:45.470 CC lib/nvmf/nvmf_rpc.o 00:01:45.470 CC lib/nvmf/transport.o 00:01:45.470 CC lib/ftl/utils/ftl_property.o 00:01:45.470 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:45.470 CC lib/nvmf/tcp.o 00:01:45.470 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:45.470 CC lib/nvmf/rdma.o 00:01:45.470 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:45.470 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:45.470 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:45.470 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:45.470 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:45.470 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:45.470 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:45.470 CC lib/ublk/ublk.o 00:01:45.470 CC lib/ublk/ublk_rpc.o 00:01:45.470 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:45.470 CC lib/ftl/base/ftl_base_dev.o 00:01:45.470 CC lib/ftl/ftl_trace.o 00:01:45.470 CC lib/ftl/base/ftl_base_bdev.o 00:01:46.037 LIB libspdk_scsi.a 00:01:46.037 SO libspdk_scsi.so.9.0 00:01:46.037 LIB libspdk_nbd.a 00:01:46.296 SO libspdk_nbd.so.7.0 00:01:46.296 SYMLINK libspdk_scsi.so 00:01:46.296 SYMLINK libspdk_nbd.so 00:01:46.296 LIB libspdk_ublk.a 00:01:46.296 SO libspdk_ublk.so.3.0 00:01:46.296 SYMLINK libspdk_ublk.so 00:01:46.554 LIB libspdk_ftl.a 00:01:46.554 CC lib/iscsi/init_grp.o 00:01:46.554 CC lib/iscsi/conn.o 00:01:46.554 CC lib/vhost/vhost_rpc.o 00:01:46.554 CC lib/vhost/vhost.o 00:01:46.554 CC lib/iscsi/iscsi.o 00:01:46.554 CC lib/iscsi/md5.o 00:01:46.554 CC lib/vhost/vhost_scsi.o 00:01:46.554 CC lib/vhost/rte_vhost_user.o 00:01:46.554 CC lib/iscsi/param.o 00:01:46.554 CC lib/vhost/vhost_blk.o 00:01:46.554 CC lib/iscsi/portal_grp.o 00:01:46.554 CC lib/iscsi/iscsi_rpc.o 00:01:46.554 CC lib/iscsi/tgt_node.o 00:01:46.554 CC 
lib/iscsi/iscsi_subsystem.o 00:01:46.554 CC lib/iscsi/task.o 00:01:46.554 SO libspdk_ftl.so.9.0 00:01:46.813 SYMLINK libspdk_ftl.so 00:01:47.380 LIB libspdk_nvmf.a 00:01:47.380 LIB libspdk_vhost.a 00:01:47.380 SO libspdk_nvmf.so.18.0 00:01:47.380 SO libspdk_vhost.so.8.0 00:01:47.380 SYMLINK libspdk_vhost.so 00:01:47.380 SYMLINK libspdk_nvmf.so 00:01:47.380 LIB libspdk_iscsi.a 00:01:47.380 SO libspdk_iscsi.so.8.0 00:01:47.639 SYMLINK libspdk_iscsi.so 00:01:48.205 CC module/env_dpdk/env_dpdk_rpc.o 00:01:48.205 CC module/accel/iaa/accel_iaa.o 00:01:48.205 CC module/accel/iaa/accel_iaa_rpc.o 00:01:48.205 CC module/blob/bdev/blob_bdev.o 00:01:48.205 CC module/keyring/file/keyring.o 00:01:48.205 CC module/keyring/file/keyring_rpc.o 00:01:48.205 CC module/accel/dsa/accel_dsa_rpc.o 00:01:48.205 CC module/accel/dsa/accel_dsa.o 00:01:48.205 LIB libspdk_env_dpdk_rpc.a 00:01:48.205 CC module/accel/error/accel_error.o 00:01:48.205 CC module/accel/error/accel_error_rpc.o 00:01:48.205 CC module/accel/ioat/accel_ioat.o 00:01:48.205 CC module/accel/ioat/accel_ioat_rpc.o 00:01:48.205 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:48.205 CC module/sock/posix/posix.o 00:01:48.205 CC module/scheduler/gscheduler/gscheduler.o 00:01:48.205 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:48.205 SO libspdk_env_dpdk_rpc.so.6.0 00:01:48.205 SYMLINK libspdk_env_dpdk_rpc.so 00:01:48.205 LIB libspdk_accel_iaa.a 00:01:48.205 LIB libspdk_keyring_file.a 00:01:48.465 LIB libspdk_scheduler_gscheduler.a 00:01:48.465 LIB libspdk_accel_error.a 00:01:48.465 LIB libspdk_scheduler_dpdk_governor.a 00:01:48.465 SO libspdk_keyring_file.so.1.0 00:01:48.465 SO libspdk_accel_iaa.so.3.0 00:01:48.465 LIB libspdk_accel_ioat.a 00:01:48.465 LIB libspdk_scheduler_dynamic.a 00:01:48.465 SO libspdk_accel_error.so.2.0 00:01:48.465 SO libspdk_scheduler_gscheduler.so.4.0 00:01:48.465 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:48.465 LIB libspdk_accel_dsa.a 00:01:48.465 LIB libspdk_blob_bdev.a 00:01:48.465 SO libspdk_scheduler_dynamic.so.4.0 00:01:48.465 SO libspdk_accel_ioat.so.6.0 00:01:48.465 SYMLINK libspdk_keyring_file.so 00:01:48.465 SYMLINK libspdk_accel_iaa.so 00:01:48.465 SO libspdk_blob_bdev.so.11.0 00:01:48.465 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:48.465 SO libspdk_accel_dsa.so.5.0 00:01:48.465 SYMLINK libspdk_scheduler_gscheduler.so 00:01:48.465 SYMLINK libspdk_accel_error.so 00:01:48.465 SYMLINK libspdk_scheduler_dynamic.so 00:01:48.465 SYMLINK libspdk_blob_bdev.so 00:01:48.465 SYMLINK libspdk_accel_ioat.so 00:01:48.465 SYMLINK libspdk_accel_dsa.so 00:01:48.724 LIB libspdk_sock_posix.a 00:01:48.724 SO libspdk_sock_posix.so.6.0 00:01:48.982 SYMLINK libspdk_sock_posix.so 00:01:48.982 CC module/bdev/passthru/vbdev_passthru.o 00:01:48.982 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:48.982 CC module/bdev/lvol/vbdev_lvol.o 00:01:48.982 CC module/bdev/nvme/bdev_nvme.o 00:01:48.982 CC module/bdev/nvme/nvme_rpc.o 00:01:48.982 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:48.982 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:48.982 CC module/bdev/nvme/bdev_mdns_client.o 00:01:48.982 CC module/bdev/nvme/vbdev_opal.o 00:01:48.982 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:48.982 CC module/bdev/delay/vbdev_delay.o 00:01:48.982 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:48.982 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:48.982 CC module/blobfs/bdev/blobfs_bdev.o 00:01:48.982 CC module/bdev/gpt/gpt.o 00:01:48.982 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:48.982 CC module/bdev/raid/bdev_raid.o 00:01:48.982 CC 
module/bdev/raid/bdev_raid_rpc.o 00:01:48.982 CC module/bdev/gpt/vbdev_gpt.o 00:01:48.982 CC module/bdev/iscsi/bdev_iscsi.o 00:01:48.982 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:48.982 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:48.982 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:48.982 CC module/bdev/raid/raid0.o 00:01:48.982 CC module/bdev/raid/bdev_raid_sb.o 00:01:48.982 CC module/bdev/raid/raid1.o 00:01:48.982 CC module/bdev/raid/concat.o 00:01:48.982 CC module/bdev/split/vbdev_split.o 00:01:48.982 CC module/bdev/split/vbdev_split_rpc.o 00:01:48.982 CC module/bdev/malloc/bdev_malloc.o 00:01:48.982 CC module/bdev/error/vbdev_error.o 00:01:48.982 CC module/bdev/ftl/bdev_ftl.o 00:01:48.982 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:48.982 CC module/bdev/error/vbdev_error_rpc.o 00:01:48.982 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:48.982 CC module/bdev/null/bdev_null.o 00:01:48.982 CC module/bdev/null/bdev_null_rpc.o 00:01:48.982 CC module/bdev/aio/bdev_aio.o 00:01:48.982 CC module/bdev/aio/bdev_aio_rpc.o 00:01:48.982 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:48.982 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:48.982 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:49.240 LIB libspdk_blobfs_bdev.a 00:01:49.240 LIB libspdk_bdev_split.a 00:01:49.240 SO libspdk_blobfs_bdev.so.6.0 00:01:49.240 SO libspdk_bdev_split.so.6.0 00:01:49.240 LIB libspdk_bdev_null.a 00:01:49.240 LIB libspdk_bdev_error.a 00:01:49.240 LIB libspdk_bdev_gpt.a 00:01:49.240 LIB libspdk_bdev_passthru.a 00:01:49.240 LIB libspdk_bdev_ftl.a 00:01:49.240 SYMLINK libspdk_blobfs_bdev.so 00:01:49.240 SO libspdk_bdev_null.so.6.0 00:01:49.240 LIB libspdk_bdev_zone_block.a 00:01:49.240 SO libspdk_bdev_passthru.so.6.0 00:01:49.240 SO libspdk_bdev_error.so.6.0 00:01:49.240 SO libspdk_bdev_gpt.so.6.0 00:01:49.240 SYMLINK libspdk_bdev_split.so 00:01:49.240 SO libspdk_bdev_ftl.so.6.0 00:01:49.240 SO libspdk_bdev_zone_block.so.6.0 00:01:49.240 LIB libspdk_bdev_aio.a 00:01:49.240 LIB libspdk_bdev_malloc.a 00:01:49.240 LIB libspdk_bdev_iscsi.a 00:01:49.240 SYMLINK libspdk_bdev_null.so 00:01:49.240 LIB libspdk_bdev_delay.a 00:01:49.240 SO libspdk_bdev_aio.so.6.0 00:01:49.240 SYMLINK libspdk_bdev_gpt.so 00:01:49.240 SYMLINK libspdk_bdev_error.so 00:01:49.240 SYMLINK libspdk_bdev_passthru.so 00:01:49.240 SYMLINK libspdk_bdev_ftl.so 00:01:49.240 SO libspdk_bdev_iscsi.so.6.0 00:01:49.240 SYMLINK libspdk_bdev_zone_block.so 00:01:49.240 SO libspdk_bdev_malloc.so.6.0 00:01:49.240 SO libspdk_bdev_delay.so.6.0 00:01:49.240 LIB libspdk_bdev_lvol.a 00:01:49.497 SYMLINK libspdk_bdev_aio.so 00:01:49.497 SYMLINK libspdk_bdev_iscsi.so 00:01:49.497 SO libspdk_bdev_lvol.so.6.0 00:01:49.497 SYMLINK libspdk_bdev_malloc.so 00:01:49.497 LIB libspdk_bdev_virtio.a 00:01:49.497 SYMLINK libspdk_bdev_delay.so 00:01:49.497 SO libspdk_bdev_virtio.so.6.0 00:01:49.497 SYMLINK libspdk_bdev_lvol.so 00:01:49.497 SYMLINK libspdk_bdev_virtio.so 00:01:49.771 LIB libspdk_bdev_raid.a 00:01:49.771 SO libspdk_bdev_raid.so.6.0 00:01:49.771 SYMLINK libspdk_bdev_raid.so 00:01:50.766 LIB libspdk_bdev_nvme.a 00:01:50.766 SO libspdk_bdev_nvme.so.7.0 00:01:50.766 SYMLINK libspdk_bdev_nvme.so 00:01:51.332 CC module/event/subsystems/sock/sock.o 00:01:51.332 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:51.332 CC module/event/subsystems/vmd/vmd.o 00:01:51.332 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:51.332 CC module/event/subsystems/scheduler/scheduler.o 00:01:51.332 CC module/event/subsystems/iobuf/iobuf.o 00:01:51.332 CC 
module/event/subsystems/iobuf/iobuf_rpc.o 00:01:51.332 CC module/event/subsystems/keyring/keyring.o 00:01:51.332 LIB libspdk_event_sock.a 00:01:51.332 LIB libspdk_event_vhost_blk.a 00:01:51.332 LIB libspdk_event_vmd.a 00:01:51.332 SO libspdk_event_sock.so.5.0 00:01:51.332 LIB libspdk_event_scheduler.a 00:01:51.332 LIB libspdk_event_keyring.a 00:01:51.332 LIB libspdk_event_iobuf.a 00:01:51.332 SO libspdk_event_vhost_blk.so.3.0 00:01:51.332 SO libspdk_event_vmd.so.6.0 00:01:51.332 SO libspdk_event_scheduler.so.4.0 00:01:51.332 SYMLINK libspdk_event_sock.so 00:01:51.332 SO libspdk_event_keyring.so.1.0 00:01:51.590 SO libspdk_event_iobuf.so.3.0 00:01:51.590 SYMLINK libspdk_event_vhost_blk.so 00:01:51.590 SYMLINK libspdk_event_scheduler.so 00:01:51.590 SYMLINK libspdk_event_vmd.so 00:01:51.590 SYMLINK libspdk_event_keyring.so 00:01:51.590 SYMLINK libspdk_event_iobuf.so 00:01:51.849 CC module/event/subsystems/accel/accel.o 00:01:51.849 LIB libspdk_event_accel.a 00:01:51.849 SO libspdk_event_accel.so.6.0 00:01:52.108 SYMLINK libspdk_event_accel.so 00:01:52.367 CC module/event/subsystems/bdev/bdev.o 00:01:52.367 LIB libspdk_event_bdev.a 00:01:52.625 SO libspdk_event_bdev.so.6.0 00:01:52.625 SYMLINK libspdk_event_bdev.so 00:01:52.884 CC module/event/subsystems/nbd/nbd.o 00:01:52.884 CC module/event/subsystems/scsi/scsi.o 00:01:52.884 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:52.884 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:52.884 CC module/event/subsystems/ublk/ublk.o 00:01:52.884 LIB libspdk_event_nbd.a 00:01:52.884 LIB libspdk_event_scsi.a 00:01:53.142 SO libspdk_event_nbd.so.6.0 00:01:53.142 LIB libspdk_event_ublk.a 00:01:53.142 SO libspdk_event_scsi.so.6.0 00:01:53.142 SO libspdk_event_ublk.so.3.0 00:01:53.142 SYMLINK libspdk_event_nbd.so 00:01:53.142 LIB libspdk_event_nvmf.a 00:01:53.142 SYMLINK libspdk_event_scsi.so 00:01:53.142 SYMLINK libspdk_event_ublk.so 00:01:53.142 SO libspdk_event_nvmf.so.6.0 00:01:53.142 SYMLINK libspdk_event_nvmf.so 00:01:53.401 CC module/event/subsystems/iscsi/iscsi.o 00:01:53.401 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:53.401 LIB libspdk_event_vhost_scsi.a 00:01:53.401 LIB libspdk_event_iscsi.a 00:01:53.659 SO libspdk_event_vhost_scsi.so.3.0 00:01:53.659 SO libspdk_event_iscsi.so.6.0 00:01:53.659 SYMLINK libspdk_event_vhost_scsi.so 00:01:53.659 SYMLINK libspdk_event_iscsi.so 00:01:53.659 SO libspdk.so.6.0 00:01:53.659 SYMLINK libspdk.so 00:01:53.917 CC app/trace_record/trace_record.o 00:01:53.917 CXX app/trace/trace.o 00:01:53.917 CC app/spdk_nvme_identify/identify.o 00:01:54.180 CC app/spdk_top/spdk_top.o 00:01:54.180 CC app/spdk_lspci/spdk_lspci.o 00:01:54.180 CC app/spdk_nvme_perf/perf.o 00:01:54.180 TEST_HEADER include/spdk/accel.h 00:01:54.180 TEST_HEADER include/spdk/accel_module.h 00:01:54.180 TEST_HEADER include/spdk/assert.h 00:01:54.180 TEST_HEADER include/spdk/bdev.h 00:01:54.180 TEST_HEADER include/spdk/base64.h 00:01:54.180 TEST_HEADER include/spdk/barrier.h 00:01:54.180 TEST_HEADER include/spdk/bdev_module.h 00:01:54.180 TEST_HEADER include/spdk/bdev_zone.h 00:01:54.180 TEST_HEADER include/spdk/bit_pool.h 00:01:54.181 TEST_HEADER include/spdk/bit_array.h 00:01:54.181 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:54.181 TEST_HEADER include/spdk/blob_bdev.h 00:01:54.181 TEST_HEADER include/spdk/blobfs.h 00:01:54.181 TEST_HEADER include/spdk/conf.h 00:01:54.181 CC test/rpc_client/rpc_client_test.o 00:01:54.181 TEST_HEADER include/spdk/blob.h 00:01:54.181 TEST_HEADER include/spdk/config.h 00:01:54.181 TEST_HEADER 
include/spdk/cpuset.h 00:01:54.181 TEST_HEADER include/spdk/crc16.h 00:01:54.181 TEST_HEADER include/spdk/crc32.h 00:01:54.181 TEST_HEADER include/spdk/dif.h 00:01:54.181 TEST_HEADER include/spdk/crc64.h 00:01:54.181 TEST_HEADER include/spdk/dma.h 00:01:54.181 TEST_HEADER include/spdk/endian.h 00:01:54.181 TEST_HEADER include/spdk/env_dpdk.h 00:01:54.181 TEST_HEADER include/spdk/env.h 00:01:54.181 TEST_HEADER include/spdk/event.h 00:01:54.181 TEST_HEADER include/spdk/fd_group.h 00:01:54.181 TEST_HEADER include/spdk/fd.h 00:01:54.181 CC app/spdk_nvme_discover/discovery_aer.o 00:01:54.181 TEST_HEADER include/spdk/file.h 00:01:54.181 TEST_HEADER include/spdk/ftl.h 00:01:54.181 TEST_HEADER include/spdk/gpt_spec.h 00:01:54.181 CC app/iscsi_tgt/iscsi_tgt.o 00:01:54.181 TEST_HEADER include/spdk/hexlify.h 00:01:54.181 TEST_HEADER include/spdk/idxd.h 00:01:54.181 TEST_HEADER include/spdk/histogram_data.h 00:01:54.181 TEST_HEADER include/spdk/idxd_spec.h 00:01:54.181 TEST_HEADER include/spdk/init.h 00:01:54.181 CC app/spdk_dd/spdk_dd.o 00:01:54.181 TEST_HEADER include/spdk/ioat.h 00:01:54.181 TEST_HEADER include/spdk/iscsi_spec.h 00:01:54.181 TEST_HEADER include/spdk/ioat_spec.h 00:01:54.181 TEST_HEADER include/spdk/json.h 00:01:54.181 CC app/nvmf_tgt/nvmf_main.o 00:01:54.181 TEST_HEADER include/spdk/keyring.h 00:01:54.181 TEST_HEADER include/spdk/jsonrpc.h 00:01:54.181 TEST_HEADER include/spdk/keyring_module.h 00:01:54.181 TEST_HEADER include/spdk/likely.h 00:01:54.181 TEST_HEADER include/spdk/log.h 00:01:54.181 TEST_HEADER include/spdk/lvol.h 00:01:54.181 TEST_HEADER include/spdk/mmio.h 00:01:54.181 TEST_HEADER include/spdk/memory.h 00:01:54.181 CC app/vhost/vhost.o 00:01:54.181 TEST_HEADER include/spdk/nbd.h 00:01:54.181 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:54.181 TEST_HEADER include/spdk/nvme.h 00:01:54.181 TEST_HEADER include/spdk/nvme_intel.h 00:01:54.181 TEST_HEADER include/spdk/notify.h 00:01:54.181 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:54.181 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:54.181 TEST_HEADER include/spdk/nvme_spec.h 00:01:54.181 TEST_HEADER include/spdk/nvme_zns.h 00:01:54.181 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:54.181 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:54.181 TEST_HEADER include/spdk/nvmf.h 00:01:54.181 TEST_HEADER include/spdk/nvmf_transport.h 00:01:54.181 TEST_HEADER include/spdk/nvmf_spec.h 00:01:54.181 TEST_HEADER include/spdk/opal.h 00:01:54.181 TEST_HEADER include/spdk/opal_spec.h 00:01:54.181 TEST_HEADER include/spdk/pci_ids.h 00:01:54.181 TEST_HEADER include/spdk/queue.h 00:01:54.181 TEST_HEADER include/spdk/pipe.h 00:01:54.181 TEST_HEADER include/spdk/reduce.h 00:01:54.181 TEST_HEADER include/spdk/rpc.h 00:01:54.181 TEST_HEADER include/spdk/scsi.h 00:01:54.181 TEST_HEADER include/spdk/scheduler.h 00:01:54.181 TEST_HEADER include/spdk/scsi_spec.h 00:01:54.181 TEST_HEADER include/spdk/sock.h 00:01:54.181 TEST_HEADER include/spdk/string.h 00:01:54.181 TEST_HEADER include/spdk/stdinc.h 00:01:54.181 TEST_HEADER include/spdk/trace.h 00:01:54.181 TEST_HEADER include/spdk/thread.h 00:01:54.181 TEST_HEADER include/spdk/trace_parser.h 00:01:54.181 TEST_HEADER include/spdk/ublk.h 00:01:54.181 TEST_HEADER include/spdk/tree.h 00:01:54.181 TEST_HEADER include/spdk/util.h 00:01:54.181 TEST_HEADER include/spdk/uuid.h 00:01:54.181 TEST_HEADER include/spdk/version.h 00:01:54.181 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:54.181 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:54.181 TEST_HEADER include/spdk/vhost.h 00:01:54.181 TEST_HEADER 
include/spdk/vmd.h 00:01:54.181 TEST_HEADER include/spdk/xor.h 00:01:54.181 TEST_HEADER include/spdk/zipf.h 00:01:54.181 CXX test/cpp_headers/accel.o 00:01:54.181 CXX test/cpp_headers/accel_module.o 00:01:54.181 CXX test/cpp_headers/assert.o 00:01:54.181 CXX test/cpp_headers/barrier.o 00:01:54.181 CXX test/cpp_headers/base64.o 00:01:54.181 CC app/spdk_tgt/spdk_tgt.o 00:01:54.181 CXX test/cpp_headers/bdev.o 00:01:54.181 CXX test/cpp_headers/bdev_module.o 00:01:54.181 CXX test/cpp_headers/bdev_zone.o 00:01:54.181 CXX test/cpp_headers/bit_array.o 00:01:54.181 CXX test/cpp_headers/bit_pool.o 00:01:54.181 CXX test/cpp_headers/blob_bdev.o 00:01:54.181 CXX test/cpp_headers/blobfs.o 00:01:54.181 CXX test/cpp_headers/blobfs_bdev.o 00:01:54.181 CXX test/cpp_headers/blob.o 00:01:54.181 CXX test/cpp_headers/config.o 00:01:54.181 CXX test/cpp_headers/conf.o 00:01:54.181 CXX test/cpp_headers/cpuset.o 00:01:54.181 CXX test/cpp_headers/crc16.o 00:01:54.181 CXX test/cpp_headers/crc32.o 00:01:54.181 CXX test/cpp_headers/crc64.o 00:01:54.181 CXX test/cpp_headers/dif.o 00:01:54.181 CXX test/cpp_headers/dma.o 00:01:54.181 CC examples/nvme/reconnect/reconnect.o 00:01:54.181 CC examples/ioat/perf/perf.o 00:01:54.181 CC test/app/histogram_perf/histogram_perf.o 00:01:54.181 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:54.448 CC examples/nvme/abort/abort.o 00:01:54.448 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:54.448 CC app/fio/nvme/fio_plugin.o 00:01:54.448 CC test/env/memory/memory_ut.o 00:01:54.448 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:54.448 CC examples/nvme/arbitration/arbitration.o 00:01:54.448 CC test/env/pci/pci_ut.o 00:01:54.448 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:54.448 CC test/nvme/aer/aer.o 00:01:54.448 CC examples/nvme/hello_world/hello_world.o 00:01:54.448 CC examples/ioat/verify/verify.o 00:01:54.448 CC examples/accel/perf/accel_perf.o 00:01:54.448 CC examples/nvme/hotplug/hotplug.o 00:01:54.448 CC test/app/jsoncat/jsoncat.o 00:01:54.448 CC test/app/stub/stub.o 00:01:54.448 CC examples/vmd/led/led.o 00:01:54.448 CC test/accel/dif/dif.o 00:01:54.448 CC test/event/reactor/reactor.o 00:01:54.448 CC test/event/reactor_perf/reactor_perf.o 00:01:54.448 CC examples/blob/hello_world/hello_blob.o 00:01:54.448 CC test/nvme/sgl/sgl.o 00:01:54.448 CC test/nvme/startup/startup.o 00:01:54.448 CC test/thread/poller_perf/poller_perf.o 00:01:54.448 CC test/nvme/simple_copy/simple_copy.o 00:01:54.448 CC examples/vmd/lsvmd/lsvmd.o 00:01:54.448 CC test/nvme/connect_stress/connect_stress.o 00:01:54.448 CC test/nvme/boot_partition/boot_partition.o 00:01:54.448 CC test/event/event_perf/event_perf.o 00:01:54.448 CC test/nvme/reset/reset.o 00:01:54.448 CC test/nvme/reserve/reserve.o 00:01:54.448 CC test/env/vtophys/vtophys.o 00:01:54.448 CC examples/sock/hello_world/hello_sock.o 00:01:54.448 CC test/nvme/e2edp/nvme_dp.o 00:01:54.448 CC test/blobfs/mkfs/mkfs.o 00:01:54.448 CC test/nvme/err_injection/err_injection.o 00:01:54.448 CC examples/thread/thread/thread_ex.o 00:01:54.448 CC examples/blob/cli/blobcli.o 00:01:54.448 CC examples/bdev/hello_world/hello_bdev.o 00:01:54.448 CC examples/util/zipf/zipf.o 00:01:54.448 CC test/nvme/overhead/overhead.o 00:01:54.448 CC test/bdev/bdevio/bdevio.o 00:01:54.448 CC examples/idxd/perf/perf.o 00:01:54.448 CC test/event/app_repeat/app_repeat.o 00:01:54.448 CC test/nvme/fused_ordering/fused_ordering.o 00:01:54.448 CC test/nvme/compliance/nvme_compliance.o 00:01:54.448 CC test/nvme/fdp/fdp.o 00:01:54.448 CC test/event/scheduler/scheduler.o 
00:01:54.448 CC test/dma/test_dma/test_dma.o 00:01:54.448 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:54.448 CC examples/bdev/bdevperf/bdevperf.o 00:01:54.448 CC test/app/bdev_svc/bdev_svc.o 00:01:54.448 CC app/fio/bdev/fio_plugin.o 00:01:54.448 CC test/nvme/cuse/cuse.o 00:01:54.448 CC examples/nvmf/nvmf/nvmf.o 00:01:54.448 LINK spdk_lspci 00:01:54.448 LINK rpc_client_test 00:01:54.709 CC test/lvol/esnap/esnap.o 00:01:54.709 LINK interrupt_tgt 00:01:54.709 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:54.709 CC test/env/mem_callbacks/mem_callbacks.o 00:01:54.709 LINK vhost 00:01:54.709 LINK nvmf_tgt 00:01:54.709 LINK spdk_nvme_discover 00:01:54.709 LINK spdk_trace_record 00:01:54.709 LINK iscsi_tgt 00:01:54.709 LINK spdk_tgt 00:01:54.709 LINK histogram_perf 00:01:54.709 LINK jsoncat 00:01:54.709 LINK led 00:01:54.709 LINK reactor_perf 00:01:54.709 CXX test/cpp_headers/endian.o 00:01:54.709 LINK reactor 00:01:54.709 LINK cmb_copy 00:01:54.709 LINK lsvmd 00:01:54.709 LINK startup 00:01:54.709 LINK pmr_persistence 00:01:54.709 LINK event_perf 00:01:54.709 CXX test/cpp_headers/env.o 00:01:54.709 CXX test/cpp_headers/env_dpdk.o 00:01:54.709 CXX test/cpp_headers/event.o 00:01:54.709 LINK stub 00:01:54.709 CXX test/cpp_headers/fd_group.o 00:01:54.709 LINK env_dpdk_post_init 00:01:54.709 LINK app_repeat 00:01:54.709 CXX test/cpp_headers/fd.o 00:01:54.709 CXX test/cpp_headers/file.o 00:01:54.709 LINK vtophys 00:01:54.971 LINK poller_perf 00:01:54.971 CXX test/cpp_headers/ftl.o 00:01:54.971 CXX test/cpp_headers/gpt_spec.o 00:01:54.971 CXX test/cpp_headers/hexlify.o 00:01:54.971 CXX test/cpp_headers/histogram_data.o 00:01:54.971 LINK connect_stress 00:01:54.971 LINK zipf 00:01:54.971 CXX test/cpp_headers/idxd.o 00:01:54.971 CXX test/cpp_headers/idxd_spec.o 00:01:54.971 CXX test/cpp_headers/init.o 00:01:54.971 LINK mkfs 00:01:54.971 LINK doorbell_aers 00:01:54.971 LINK bdev_svc 00:01:54.971 LINK boot_partition 00:01:54.971 LINK ioat_perf 00:01:54.971 LINK spdk_dd 00:01:54.971 LINK hotplug 00:01:54.971 LINK err_injection 00:01:54.971 CXX test/cpp_headers/ioat_spec.o 00:01:54.971 CXX test/cpp_headers/ioat.o 00:01:54.971 LINK hello_blob 00:01:54.971 LINK verify 00:01:54.971 LINK hello_sock 00:01:54.971 CXX test/cpp_headers/iscsi_spec.o 00:01:54.971 CXX test/cpp_headers/json.o 00:01:54.971 CXX test/cpp_headers/jsonrpc.o 00:01:54.971 CXX test/cpp_headers/keyring.o 00:01:54.971 LINK reserve 00:01:54.971 LINK hello_world 00:01:54.971 LINK fused_ordering 00:01:54.971 LINK sgl 00:01:54.971 LINK hello_bdev 00:01:54.971 CXX test/cpp_headers/keyring_module.o 00:01:54.971 LINK simple_copy 00:01:54.971 LINK aer 00:01:54.971 LINK scheduler 00:01:54.971 CXX test/cpp_headers/likely.o 00:01:54.971 CXX test/cpp_headers/log.o 00:01:54.971 LINK reset 00:01:54.971 LINK thread 00:01:54.971 CXX test/cpp_headers/lvol.o 00:01:54.971 LINK nvme_dp 00:01:54.971 LINK overhead 00:01:54.971 CXX test/cpp_headers/memory.o 00:01:54.971 CXX test/cpp_headers/mmio.o 00:01:54.971 LINK reconnect 00:01:54.971 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:54.971 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:54.971 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:54.971 CXX test/cpp_headers/nbd.o 00:01:54.971 CXX test/cpp_headers/notify.o 00:01:54.971 CXX test/cpp_headers/nvme.o 00:01:54.971 LINK arbitration 00:01:54.971 LINK abort 00:01:54.971 CXX test/cpp_headers/nvme_intel.o 00:01:54.971 CXX test/cpp_headers/nvme_ocssd.o 00:01:54.971 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:55.230 CXX test/cpp_headers/nvme_spec.o 00:01:55.230 
CXX test/cpp_headers/nvme_zns.o 00:01:55.230 CXX test/cpp_headers/nvmf_cmd.o 00:01:55.230 LINK fdp 00:01:55.230 LINK nvmf 00:01:55.230 LINK pci_ut 00:01:55.230 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:55.230 CXX test/cpp_headers/nvmf.o 00:01:55.230 LINK nvme_compliance 00:01:55.230 CXX test/cpp_headers/nvmf_spec.o 00:01:55.230 CXX test/cpp_headers/nvmf_transport.o 00:01:55.230 CXX test/cpp_headers/opal.o 00:01:55.230 CXX test/cpp_headers/opal_spec.o 00:01:55.230 LINK idxd_perf 00:01:55.230 LINK bdevio 00:01:55.230 LINK dif 00:01:55.230 LINK spdk_trace 00:01:55.230 CXX test/cpp_headers/pci_ids.o 00:01:55.230 CXX test/cpp_headers/queue.o 00:01:55.230 CXX test/cpp_headers/pipe.o 00:01:55.230 CXX test/cpp_headers/reduce.o 00:01:55.230 LINK accel_perf 00:01:55.230 CXX test/cpp_headers/rpc.o 00:01:55.230 CXX test/cpp_headers/scheduler.o 00:01:55.230 LINK test_dma 00:01:55.230 CXX test/cpp_headers/scsi.o 00:01:55.230 CXX test/cpp_headers/scsi_spec.o 00:01:55.230 CXX test/cpp_headers/sock.o 00:01:55.230 CXX test/cpp_headers/stdinc.o 00:01:55.230 CXX test/cpp_headers/string.o 00:01:55.230 CXX test/cpp_headers/thread.o 00:01:55.230 CXX test/cpp_headers/trace.o 00:01:55.230 CXX test/cpp_headers/trace_parser.o 00:01:55.230 CXX test/cpp_headers/tree.o 00:01:55.230 CXX test/cpp_headers/ublk.o 00:01:55.230 CXX test/cpp_headers/util.o 00:01:55.230 CXX test/cpp_headers/uuid.o 00:01:55.230 CXX test/cpp_headers/vfio_user_pci.o 00:01:55.230 CXX test/cpp_headers/version.o 00:01:55.230 CXX test/cpp_headers/vfio_user_spec.o 00:01:55.230 CXX test/cpp_headers/vhost.o 00:01:55.230 CXX test/cpp_headers/vmd.o 00:01:55.230 LINK nvme_manage 00:01:55.230 CXX test/cpp_headers/xor.o 00:01:55.230 CXX test/cpp_headers/zipf.o 00:01:55.488 LINK spdk_nvme 00:01:55.488 LINK blobcli 00:01:55.488 LINK nvme_fuzz 00:01:55.488 LINK spdk_nvme_identify 00:01:55.488 LINK spdk_nvme_perf 00:01:55.488 LINK spdk_bdev 00:01:55.488 LINK bdevperf 00:01:55.488 LINK spdk_top 00:01:55.747 LINK vhost_fuzz 00:01:55.747 LINK mem_callbacks 00:01:55.747 LINK memory_ut 00:01:55.747 LINK cuse 00:01:56.683 LINK iscsi_fuzz 00:01:58.060 LINK esnap 00:01:58.627 00:01:58.627 real 0m42.898s 00:01:58.627 user 6m34.701s 00:01:58.627 sys 3m35.596s 00:01:58.627 17:06:07 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:01:58.627 17:06:07 -- common/autotest_common.sh@10 -- $ set +x 00:01:58.627 ************************************ 00:01:58.627 END TEST make 00:01:58.627 ************************************ 00:01:58.627 17:06:07 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:58.627 17:06:07 -- pm/common@30 -- $ signal_monitor_resources TERM 00:01:58.627 17:06:07 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:01:58.627 17:06:07 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:58.627 17:06:07 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:58.627 17:06:07 -- pm/common@45 -- $ pid=2815635 00:01:58.627 17:06:07 -- pm/common@52 -- $ sudo kill -TERM 2815635 00:01:58.627 17:06:07 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:58.627 17:06:07 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:58.627 17:06:07 -- pm/common@45 -- $ pid=2815637 00:01:58.627 17:06:07 -- pm/common@52 -- $ sudo kill -TERM 2815637 00:01:58.627 17:06:07 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:58.627 17:06:07 -- pm/common@44 -- $ [[ -e 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:58.627 17:06:07 -- pm/common@45 -- $ pid=2815636 00:01:58.627 17:06:07 -- pm/common@52 -- $ sudo kill -TERM 2815636 00:01:58.627 17:06:07 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:58.627 17:06:07 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:58.627 17:06:07 -- pm/common@45 -- $ pid=2815638 00:01:58.627 17:06:07 -- pm/common@52 -- $ sudo kill -TERM 2815638 00:01:58.627 17:06:07 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:01:58.627 17:06:07 -- nvmf/common.sh@7 -- # uname -s 00:01:58.627 17:06:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:58.627 17:06:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:58.627 17:06:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:58.627 17:06:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:58.627 17:06:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:58.627 17:06:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:58.627 17:06:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:58.627 17:06:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:58.627 17:06:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:58.627 17:06:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:58.627 17:06:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:01:58.627 17:06:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:01:58.627 17:06:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:58.627 17:06:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:58.627 17:06:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:58.628 17:06:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:58.628 17:06:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:58.628 17:06:07 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:58.628 17:06:07 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:58.628 17:06:07 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:58.628 17:06:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:58.628 17:06:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:58.628 17:06:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:58.628 17:06:07 -- paths/export.sh@5 -- # export PATH 00:01:58.628 17:06:07 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:58.628 17:06:07 -- nvmf/common.sh@47 -- # : 0 00:01:58.628 17:06:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:58.628 17:06:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:58.628 17:06:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:58.628 17:06:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:58.628 17:06:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:58.628 17:06:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:58.628 17:06:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:58.628 17:06:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:58.628 17:06:07 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:58.628 17:06:07 -- spdk/autotest.sh@32 -- # uname -s 00:01:58.887 17:06:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:58.887 17:06:07 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:58.887 17:06:07 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:01:58.887 17:06:07 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:58.887 17:06:07 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:01:58.887 17:06:07 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:58.887 17:06:07 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:58.887 17:06:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:58.887 17:06:07 -- spdk/autotest.sh@48 -- # udevadm_pid=2873447 00:01:58.887 17:06:07 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:58.887 17:06:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:58.887 17:06:07 -- pm/common@17 -- # local monitor 00:01:58.887 17:06:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:58.887 17:06:07 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=2873449 00:01:58.887 17:06:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:58.887 17:06:07 -- pm/common@21 -- # date +%s 00:01:58.887 17:06:07 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=2873451 00:01:58.887 17:06:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:58.887 17:06:07 -- pm/common@21 -- # date +%s 00:01:58.887 17:06:07 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=2873455 00:01:58.887 17:06:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:58.887 17:06:07 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=2873459 00:01:58.887 17:06:07 -- pm/common@26 -- # sleep 1 00:01:58.887 17:06:07 -- pm/common@21 -- # date +%s 00:01:58.887 17:06:07 -- pm/common@21 -- # date +%s 00:01:58.887 17:06:07 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713971167 00:01:58.887 17:06:07 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713971167 00:01:58.887 17:06:07 -- pm/common@21 -- # sudo -E 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713971167 00:01:58.887 17:06:07 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713971167 00:01:58.887 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713971167_collect-vmstat.pm.log 00:01:58.887 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713971167_collect-bmc-pm.bmc.pm.log 00:01:58.887 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713971167_collect-cpu-load.pm.log 00:01:58.887 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713971167_collect-cpu-temp.pm.log 00:01:59.825 17:06:08 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:59.825 17:06:08 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:59.825 17:06:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:01:59.825 17:06:08 -- common/autotest_common.sh@10 -- # set +x 00:01:59.825 17:06:08 -- spdk/autotest.sh@59 -- # create_test_list 00:01:59.825 17:06:08 -- common/autotest_common.sh@734 -- # xtrace_disable 00:01:59.825 17:06:08 -- common/autotest_common.sh@10 -- # set +x 00:01:59.825 17:06:08 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:01:59.825 17:06:08 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:59.825 17:06:08 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:59.825 17:06:08 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:59.825 17:06:08 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:59.825 17:06:08 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:59.825 17:06:08 -- common/autotest_common.sh@1441 -- # uname 00:01:59.825 17:06:08 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:01:59.825 17:06:08 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:59.825 17:06:08 -- common/autotest_common.sh@1461 -- # uname 00:01:59.825 17:06:08 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:01:59.825 17:06:08 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:59.825 17:06:08 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:59.825 17:06:08 -- spdk/autotest.sh@72 -- # hash lcov 00:01:59.825 17:06:08 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:59.825 17:06:08 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:59.825 --rc lcov_branch_coverage=1 00:01:59.825 --rc lcov_function_coverage=1 00:01:59.825 --rc genhtml_branch_coverage=1 00:01:59.825 --rc genhtml_function_coverage=1 00:01:59.825 --rc genhtml_legend=1 00:01:59.825 --rc geninfo_all_blocks=1 00:01:59.825 ' 00:01:59.825 17:06:08 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:59.825 --rc lcov_branch_coverage=1 00:01:59.825 --rc lcov_function_coverage=1 00:01:59.825 --rc genhtml_branch_coverage=1 00:01:59.825 --rc genhtml_function_coverage=1 00:01:59.825 --rc genhtml_legend=1 00:01:59.825 --rc geninfo_all_blocks=1 00:01:59.825 ' 00:01:59.825 17:06:08 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:59.825 --rc 
lcov_branch_coverage=1 00:01:59.825 --rc lcov_function_coverage=1 00:01:59.825 --rc genhtml_branch_coverage=1 00:01:59.825 --rc genhtml_function_coverage=1 00:01:59.825 --rc genhtml_legend=1 00:01:59.825 --rc geninfo_all_blocks=1 00:01:59.825 --no-external' 00:01:59.825 17:06:08 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:59.825 --rc lcov_branch_coverage=1 00:01:59.825 --rc lcov_function_coverage=1 00:01:59.825 --rc genhtml_branch_coverage=1 00:01:59.825 --rc genhtml_function_coverage=1 00:01:59.825 --rc genhtml_legend=1 00:01:59.825 --rc geninfo_all_blocks=1 00:01:59.825 --no-external' 00:01:59.825 17:06:08 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:59.825 lcov: LCOV version 1.14 00:01:59.825 17:06:09 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:06.390 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 
00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:06.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:06.390 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:06.391 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:06.391 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 
00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:06.391 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:06.391 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:09.677 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:09.677 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:17.792 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:17.793 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:17.793 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:17.793 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:17.793 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:17.793 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:23.065 17:06:31 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:23.065 17:06:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:23.065 17:06:31 -- common/autotest_common.sh@10 -- # set +x 00:02:23.065 17:06:31 -- spdk/autotest.sh@91 -- # rm -f 00:02:23.065 17:06:31 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:24.973 0000:5f:00.0 (8086 0a54): Already using the nvme driver 00:02:24.973 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:24.973 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:24.973 0000:00:04.5 (8086 
2021): Already using the ioatdma driver 00:02:24.973 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:24.973 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:24.973 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:24.973 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:24.973 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:24.973 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:24.973 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:24.973 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:25.232 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:25.232 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:25.232 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:25.232 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:25.232 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:25.232 17:06:34 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:25.232 17:06:34 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:25.232 17:06:34 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:25.232 17:06:34 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:25.232 17:06:34 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:25.232 17:06:34 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:25.232 17:06:34 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:25.232 17:06:34 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:25.232 17:06:34 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:25.232 17:06:34 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:25.232 17:06:34 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:25.232 17:06:34 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:25.232 17:06:34 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:25.232 17:06:34 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:25.232 17:06:34 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:25.232 No valid GPT data, bailing 00:02:25.232 17:06:34 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:25.232 17:06:34 -- scripts/common.sh@391 -- # pt= 00:02:25.232 17:06:34 -- scripts/common.sh@392 -- # return 1 00:02:25.232 17:06:34 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:25.232 1+0 records in 00:02:25.232 1+0 records out 00:02:25.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00436217 s, 240 MB/s 00:02:25.232 17:06:34 -- spdk/autotest.sh@118 -- # sync 00:02:25.232 17:06:34 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:25.232 17:06:34 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:25.232 17:06:34 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:30.605 17:06:39 -- spdk/autotest.sh@124 -- # uname -s 00:02:30.605 17:06:39 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:30.605 17:06:39 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:02:30.605 17:06:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:30.605 17:06:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:30.605 17:06:39 -- common/autotest_common.sh@10 -- # set +x 00:02:30.605 ************************************ 
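[annotation] Before wiping the drive, autotest ran get_zoned_devs (traced above): each /sys/block/nvme* entry is checked for a queue/zoned attribute, and only devices reporting something other than "none" are treated as zoned and excluded from the destructive steps. A rough, self-contained re-implementation of that check, an approximation of the autotest_common.sh helpers rather than the exact code:

    # Return success if the named block device is zoned (host-aware/host-managed).
    is_block_zoned() {
        local device=$1
        # Older kernels do not expose the attribute at all; treat that as "not zoned".
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        [[ $(< "/sys/block/$device/queue/zoned") != none ]]
    }

    # Collect zoned NVMe namespaces so later steps (GPT probe, dd wipe) can skip them.
    get_zoned_devs() {
        zoned_devs=()
        local nvme
        for nvme in /sys/block/nvme*; do
            [[ -e $nvme ]] || continue                  # glob may match nothing
            is_block_zoned "${nvme##*/}" && zoned_devs+=("${nvme##*/}")
        done
    }

    get_zoned_devs
    echo "zoned devices: ${zoned_devs[*]:-<none>}"

On this runner nvme0n1 reported "none", so the spdk-gpt.py probe ran, found no valid GPT, and the first MiB of the namespace was zeroed with dd, exactly as the trace shows.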
00:02:30.605 START TEST setup.sh 00:02:30.605 ************************************ 00:02:30.605 17:06:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:02:30.605 * Looking for test storage... 00:02:30.605 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:02:30.605 17:06:39 -- setup/test-setup.sh@10 -- # uname -s 00:02:30.605 17:06:39 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:30.605 17:06:39 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:02:30.605 17:06:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:30.605 17:06:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:30.605 17:06:39 -- common/autotest_common.sh@10 -- # set +x 00:02:30.605 ************************************ 00:02:30.605 START TEST acl 00:02:30.605 ************************************ 00:02:30.605 17:06:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:02:30.605 * Looking for test storage... 00:02:30.605 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:02:30.605 17:06:39 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:30.605 17:06:39 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:30.605 17:06:39 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:30.605 17:06:39 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:30.605 17:06:39 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:30.605 17:06:39 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:30.605 17:06:39 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:30.605 17:06:39 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:30.605 17:06:39 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:30.605 17:06:39 -- setup/acl.sh@12 -- # devs=() 00:02:30.605 17:06:39 -- setup/acl.sh@12 -- # declare -a devs 00:02:30.605 17:06:39 -- setup/acl.sh@13 -- # drivers=() 00:02:30.605 17:06:39 -- setup/acl.sh@13 -- # declare -A drivers 00:02:30.605 17:06:39 -- setup/acl.sh@51 -- # setup reset 00:02:30.605 17:06:39 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:30.605 17:06:39 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:33.901 17:06:42 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:33.901 17:06:42 -- setup/acl.sh@16 -- # local dev driver 00:02:33.901 17:06:42 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:33.901 17:06:42 -- setup/acl.sh@15 -- # setup output status 00:02:33.901 17:06:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:33.901 17:06:42 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:02:36.435 Hugepages 00:02:36.435 node hugesize free / total 00:02:36.435 17:06:45 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:36.435 17:06:45 -- setup/acl.sh@19 -- # continue 00:02:36.436 17:06:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.436 17:06:45 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:36.436 17:06:45 -- setup/acl.sh@19 -- # continue 00:02:36.436 17:06:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.436 17:06:45 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:36.436 17:06:45 -- setup/acl.sh@19 -- # continue 00:02:36.436 17:06:45 -- setup/acl.sh@18 -- # read -r _ dev 
_ _ _ driver _ 00:02:36.436 00:02:36.436 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:36.436 17:06:45 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:36.436 17:06:45 -- setup/acl.sh@19 -- # continue 00:02:36.436 17:06:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.436 17:06:45 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:36.436 17:06:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.436 17:06:45 -- setup/acl.sh@20 -- # continue 00:02:36.436 17:06:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.436 17:06:45 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:36.436 17:06:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.436 17:06:45 -- setup/acl.sh@20 -- # continue 00:02:36.436 17:06:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.436 17:06:45 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:36.436 17:06:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.436 17:06:45 -- setup/acl.sh@20 -- # continue 00:02:36.436 17:06:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.436 17:06:45 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:36.436 17:06:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.436 17:06:45 -- setup/acl.sh@20 -- # continue 00:02:36.436 17:06:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.436 17:06:45 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:36.436 17:06:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.436 17:06:45 -- setup/acl.sh@20 -- # continue 00:02:36.436 17:06:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.436 17:06:45 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:36.436 17:06:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.436 17:06:45 -- setup/acl.sh@20 -- # continue 00:02:36.436 17:06:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.436 17:06:45 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:36.436 17:06:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.436 17:06:45 -- setup/acl.sh@20 -- # continue 00:02:36.436 17:06:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.436 17:06:45 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:36.436 17:06:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.436 17:06:45 -- setup/acl.sh@20 -- # continue 00:02:36.436 17:06:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.436 17:06:45 -- setup/acl.sh@19 -- # [[ 0000:5f:00.0 == *:*:*.* ]] 00:02:36.436 17:06:45 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:36.436 17:06:45 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:02:36.436 17:06:45 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:36.436 17:06:45 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:36.436 17:06:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.436 17:06:45 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:36.436 17:06:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.436 17:06:45 -- setup/acl.sh@20 -- # continue 00:02:36.436 17:06:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.436 17:06:45 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:36.436 17:06:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.436 17:06:45 -- setup/acl.sh@20 -- # continue 00:02:36.436 17:06:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.436 17:06:45 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:36.436 
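[annotation] The acl test walks the `setup.sh status` table shown above (Type BDF Vendor Device NUMA Driver Device Block devices), keeping only rows whose BDF column looks like a PCI function and whose driver column is nvme; the ioatdma channels are skipped with `continue`. Roughly, and assuming setup.sh is invoked from the same path as in this log (the real acl.sh drives it through its setup() wrapper):

    declare -a devs=()          # BDFs of NVMe controllers found in the status table
    declare -A drivers=()       # BDF -> driver, as parsed from the table

    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue     # skip the Hugepages block and the header row
        [[ $driver == nvme ]]  || continue    # ioatdma (and other) rows are ignored
        devs+=("$dev")
        drivers["$dev"]=$driver
    done < <(/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status)

    echo "found ${#devs[@]} nvme controller(s): ${devs[*]}"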
17:06:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.436 17:06:45 -- setup/acl.sh@20 -- # continue 00:02:36.436 17:06:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.436 17:06:45 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:36.436 17:06:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.436 17:06:45 -- setup/acl.sh@20 -- # continue 00:02:36.436 17:06:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.436 17:06:45 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:36.436 17:06:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.436 17:06:45 -- setup/acl.sh@20 -- # continue 00:02:36.436 17:06:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.436 17:06:45 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:36.436 17:06:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.436 17:06:45 -- setup/acl.sh@20 -- # continue 00:02:36.436 17:06:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.436 17:06:45 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:36.436 17:06:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.436 17:06:45 -- setup/acl.sh@20 -- # continue 00:02:36.436 17:06:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.436 17:06:45 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:36.436 17:06:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.436 17:06:45 -- setup/acl.sh@20 -- # continue 00:02:36.436 17:06:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.436 17:06:45 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:36.436 17:06:45 -- setup/acl.sh@54 -- # run_test denied denied 00:02:36.436 17:06:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:36.436 17:06:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:36.436 17:06:45 -- common/autotest_common.sh@10 -- # set +x 00:02:36.436 ************************************ 00:02:36.436 START TEST denied 00:02:36.436 ************************************ 00:02:36.436 17:06:45 -- common/autotest_common.sh@1111 -- # denied 00:02:36.436 17:06:45 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5f:00.0' 00:02:36.436 17:06:45 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5f:00.0' 00:02:36.436 17:06:45 -- setup/acl.sh@38 -- # setup output config 00:02:36.436 17:06:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:36.436 17:06:45 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:02:39.725 0000:5f:00.0 (8086 0a54): Skipping denied controller at 0000:5f:00.0 00:02:39.725 17:06:48 -- setup/acl.sh@40 -- # verify 0000:5f:00.0 00:02:39.725 17:06:48 -- setup/acl.sh@28 -- # local dev driver 00:02:39.725 17:06:48 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:39.725 17:06:48 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5f:00.0 ]] 00:02:39.725 17:06:48 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5f:00.0/driver 00:02:39.725 17:06:48 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:39.725 17:06:48 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:39.725 17:06:48 -- setup/acl.sh@41 -- # setup reset 00:02:39.725 17:06:48 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:39.725 17:06:48 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:43.910 00:02:43.910 real 0m6.801s 00:02:43.910 user 0m2.209s 00:02:43.910 sys 0m3.889s 00:02:43.910 17:06:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 
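[annotation] The denied test above exports PCI_BLOCKED=' 0000:5f:00.0', expects `setup.sh config` to print "Skipping denied controller at 0000:5f:00.0", and then verifies the controller is still bound to its original kernel driver. The verification itself is just a sysfs readlink, roughly as below (the function name is illustrative):

    # Check which kernel driver a PCI function is currently bound to.
    verify_driver() {
        local bdf=$1 expected=$2 link
        [[ -e /sys/bus/pci/devices/$bdf/driver ]] || { echo "$bdf: no driver bound"; return 1; }
        link=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")
        [[ ${link##*/} == "$expected" ]]
    }

    # After a config run with the controller blocked, it should still be on nvme:
    verify_driver 0000:5f:00.0 nvme && echo "0000:5f:00.0 untouched (still nvme)"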
00:02:43.910 17:06:52 -- common/autotest_common.sh@10 -- # set +x 00:02:43.910 ************************************ 00:02:43.910 END TEST denied 00:02:43.910 ************************************ 00:02:43.910 17:06:52 -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:43.910 17:06:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:43.910 17:06:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:43.910 17:06:52 -- common/autotest_common.sh@10 -- # set +x 00:02:43.910 ************************************ 00:02:43.910 START TEST allowed 00:02:43.910 ************************************ 00:02:43.910 17:06:52 -- common/autotest_common.sh@1111 -- # allowed 00:02:43.910 17:06:52 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5f:00.0 00:02:43.910 17:06:52 -- setup/acl.sh@45 -- # setup output config 00:02:43.910 17:06:52 -- setup/acl.sh@46 -- # grep -E '0000:5f:00.0 .*: nvme -> .*' 00:02:43.910 17:06:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:43.910 17:06:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:02:48.096 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:02:48.096 17:06:56 -- setup/acl.sh@47 -- # verify 00:02:48.096 17:06:56 -- setup/acl.sh@28 -- # local dev driver 00:02:48.096 17:06:56 -- setup/acl.sh@48 -- # setup reset 00:02:48.096 17:06:56 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:48.096 17:06:56 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:51.380 00:02:51.380 real 0m7.407s 00:02:51.380 user 0m2.139s 00:02:51.380 sys 0m3.800s 00:02:51.380 17:06:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:51.380 17:06:59 -- common/autotest_common.sh@10 -- # set +x 00:02:51.380 ************************************ 00:02:51.380 END TEST allowed 00:02:51.380 ************************************ 00:02:51.380 00:02:51.380 real 0m20.388s 00:02:51.380 user 0m6.690s 00:02:51.380 sys 0m11.701s 00:02:51.380 17:07:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:51.380 17:07:00 -- common/autotest_common.sh@10 -- # set +x 00:02:51.380 ************************************ 00:02:51.380 END TEST acl 00:02:51.380 ************************************ 00:02:51.380 17:07:00 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:02:51.380 17:07:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:51.380 17:07:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:51.380 17:07:00 -- common/autotest_common.sh@10 -- # set +x 00:02:51.380 ************************************ 00:02:51.380 START TEST hugepages 00:02:51.380 ************************************ 00:02:51.380 17:07:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:02:51.380 * Looking for test storage... 
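[annotation] The allowed test is the mirror image: with PCI_ALLOWED=0000:5f:00.0, `setup.sh config` is expected to rebind that one controller from nvme to vfio-pci, and the test greps the config output for exactly that transition (seen above as "0000:5f:00.0 (8086 0a54): nvme -> vfio-pci"). A sketch of that assertion, assuming `sudo -E` is an acceptable way to carry the environment into setup.sh; note it really does rebind the device, so it only makes sense on a disposable test box:

    bdf=0000:5f:00.0
    export PCI_ALLOWED=$bdf

    if sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config \
            | grep -E "$bdf .*: nvme -> .*"; then
        echo "OK: $bdf rebound for userspace use"
    else
        echo "FAIL: $bdf was not rebound" >&2
        exit 1
    fi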
00:02:51.380 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:02:51.380 17:07:00 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:51.380 17:07:00 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:51.380 17:07:00 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:51.380 17:07:00 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:51.380 17:07:00 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:51.380 17:07:00 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:51.380 17:07:00 -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:51.380 17:07:00 -- setup/common.sh@18 -- # local node= 00:02:51.380 17:07:00 -- setup/common.sh@19 -- # local var val 00:02:51.380 17:07:00 -- setup/common.sh@20 -- # local mem_f mem 00:02:51.380 17:07:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.380 17:07:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.380 17:07:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.380 17:07:00 -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.380 17:07:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.380 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 164679840 kB' 'MemAvailable: 168749508 kB' 'Buffers: 4124 kB' 'Cached: 17993112 kB' 'SwapCached: 0 kB' 'Active: 14970492 kB' 'Inactive: 3718324 kB' 'Active(anon): 13727736 kB' 'Inactive(anon): 0 kB' 'Active(file): 1242756 kB' 'Inactive(file): 3718324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 694968 kB' 'Mapped: 204616 kB' 'Shmem: 13036156 kB' 'KReclaimable: 506076 kB' 'Slab: 1158668 kB' 'SReclaimable: 506076 kB' 'SUnreclaim: 652592 kB' 'KernelStack: 20720 kB' 'PageTables: 10252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982028 kB' 'Committed_AS: 15264852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316660 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3767252 kB' 'DirectMap2M: 42049536 kB' 'DirectMap1G: 156237824 kB' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 
-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 
00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.381 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.381 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.382 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.382 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.382 17:07:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.382 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.382 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.382 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.382 17:07:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.382 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.382 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.382 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.382 17:07:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.382 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.382 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.382 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.382 17:07:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.382 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.382 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.382 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.382 17:07:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.382 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.382 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 
00:02:51.382 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.382 17:07:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.382 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.382 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.382 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.382 17:07:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.382 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.382 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.382 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.382 17:07:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.382 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.382 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.382 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.382 17:07:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.382 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.382 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.382 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.382 17:07:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.382 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.382 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.382 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.382 17:07:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.382 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.382 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.382 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.382 17:07:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.382 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.382 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.382 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.382 17:07:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.382 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.382 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.382 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.382 17:07:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.382 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.382 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.382 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.382 17:07:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.382 17:07:00 -- setup/common.sh@32 -- # continue 00:02:51.382 17:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.382 17:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.382 17:07:00 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.382 17:07:00 -- setup/common.sh@33 -- # echo 2048 00:02:51.382 17:07:00 -- setup/common.sh@33 -- # return 0 00:02:51.382 17:07:00 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:51.382 17:07:00 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:51.382 17:07:00 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:51.382 17:07:00 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:51.382 17:07:00 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:51.382 17:07:00 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:51.382 17:07:00 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:51.382 17:07:00 -- setup/hugepages.sh@207 -- # get_nodes 00:02:51.382 17:07:00 -- setup/hugepages.sh@27 -- # local node 00:02:51.382 17:07:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:51.382 17:07:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:51.382 17:07:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:51.382 17:07:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:51.382 17:07:00 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:51.382 17:07:00 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:51.382 17:07:00 -- setup/hugepages.sh@208 -- # clear_hp 00:02:51.382 17:07:00 -- setup/hugepages.sh@37 -- # local node hp 00:02:51.382 17:07:00 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:51.382 17:07:00 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:51.382 17:07:00 -- setup/hugepages.sh@41 -- # echo 0 00:02:51.382 17:07:00 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:51.382 17:07:00 -- setup/hugepages.sh@41 -- # echo 0 00:02:51.382 17:07:00 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:51.382 17:07:00 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:51.382 17:07:00 -- setup/hugepages.sh@41 -- # echo 0 00:02:51.382 17:07:00 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:51.382 17:07:00 -- setup/hugepages.sh@41 -- # echo 0 00:02:51.382 17:07:00 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:51.382 17:07:00 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:51.382 17:07:00 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:51.382 17:07:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:51.382 17:07:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:51.382 17:07:00 -- common/autotest_common.sh@10 -- # set +x 00:02:51.382 ************************************ 00:02:51.382 START TEST default_setup 00:02:51.382 ************************************ 00:02:51.382 17:07:00 -- common/autotest_common.sh@1111 -- # default_setup 00:02:51.382 17:07:00 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:51.382 17:07:00 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:51.382 17:07:00 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:51.382 17:07:00 -- setup/hugepages.sh@51 -- # shift 00:02:51.382 17:07:00 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:51.382 17:07:00 -- setup/hugepages.sh@52 -- # local node_ids 00:02:51.382 17:07:00 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:51.382 17:07:00 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:51.382 17:07:00 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:51.382 17:07:00 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:51.382 17:07:00 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:51.382 17:07:00 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:51.382 17:07:00 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:51.382 17:07:00 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:51.382 17:07:00 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:51.382 17:07:00 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
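[annotation] The hugepages test starts by reading Hugepagesize out of /proc/meminfo with the get_meminfo helper (the long field-by-field trace above), getting 2048 kB on this runner; clear_hp then zeroes the per-node counters before default_setup requests 2097152 kB, i.e. 1024 pages, on node 0. A compact stand-in for those steps, without the per-NUMA-node meminfo handling the real helper supports (writing nr_hugepages needs root):

    # Minimal get_meminfo: scan /proc/meminfo for a key and print its value (in kB).
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    default_hugepages=$(get_meminfo Hugepagesize)      # 2048 on this runner
    size_kb=2097152
    nr_hugepages=$((size_kb / default_hugepages))      # 2097152 / 2048 = 1024 pages

    # clear_hp equivalent: zero every per-node hugepage pool first (hugepages.sh@41).
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
        echo 0 | sudo tee "$hp" > /dev/null
    done

    # default_setup then pins the whole request to node 0.
    echo "$nr_hugepages" | sudo tee \
        "/sys/devices/system/node/node0/hugepages/hugepages-${default_hugepages}kB/nr_hugepages" > /dev/null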
00:02:51.382 17:07:00 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:51.382 17:07:00 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:51.382 17:07:00 -- setup/hugepages.sh@73 -- # return 0 00:02:51.382 17:07:00 -- setup/hugepages.sh@137 -- # setup output 00:02:51.382 17:07:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:51.382 17:07:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:02:53.913 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:53.913 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:53.913 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:53.913 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:53.913 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:53.913 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:53.913 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:53.913 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:53.913 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:53.913 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:53.913 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:53.913 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:53.913 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:53.913 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:53.913 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:53.913 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:55.304 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:02:55.566 17:07:04 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:55.566 17:07:04 -- setup/hugepages.sh@89 -- # local node 00:02:55.566 17:07:04 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:55.566 17:07:04 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:55.566 17:07:04 -- setup/hugepages.sh@92 -- # local surp 00:02:55.566 17:07:04 -- setup/hugepages.sh@93 -- # local resv 00:02:55.566 17:07:04 -- setup/hugepages.sh@94 -- # local anon 00:02:55.566 17:07:04 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:55.566 17:07:04 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:55.566 17:07:04 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:55.566 17:07:04 -- setup/common.sh@18 -- # local node= 00:02:55.566 17:07:04 -- setup/common.sh@19 -- # local var val 00:02:55.566 17:07:04 -- setup/common.sh@20 -- # local mem_f mem 00:02:55.566 17:07:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.566 17:07:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.566 17:07:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.566 17:07:04 -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.566 17:07:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.566 17:07:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 166855136 kB' 'MemAvailable: 170924580 kB' 'Buffers: 4124 kB' 'Cached: 17993216 kB' 'SwapCached: 0 kB' 'Active: 14980828 kB' 'Inactive: 3718324 kB' 'Active(anon): 13738072 kB' 'Inactive(anon): 0 kB' 'Active(file): 1242756 kB' 'Inactive(file): 3718324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 705028 kB' 'Mapped: 204712 kB' 'Shmem: 13036260 kB' 'KReclaimable: 505628 kB' 'Slab: 1156544 kB' 'SReclaimable: 505628 kB' 'SUnreclaim: 650916 kB' 'KernelStack: 20800 
kB' 'PageTables: 10060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 15273424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316740 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3767252 kB' 'DirectMap2M: 42049536 kB' 'DirectMap1G: 156237824 kB' 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.566 
17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # [[ 
KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.566 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.566 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.567 17:07:04 -- setup/common.sh@33 -- # echo 0 00:02:55.567 17:07:04 -- setup/common.sh@33 -- # return 0 00:02:55.567 17:07:04 -- setup/hugepages.sh@97 -- # anon=0 00:02:55.567 17:07:04 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:55.567 17:07:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:55.567 17:07:04 -- setup/common.sh@18 -- # local node= 00:02:55.567 17:07:04 -- setup/common.sh@19 -- # local var val 00:02:55.567 17:07:04 -- setup/common.sh@20 -- # local mem_f mem 00:02:55.567 17:07:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.567 17:07:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.567 17:07:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.567 17:07:04 -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.567 17:07:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 166856196 kB' 'MemAvailable: 170925640 kB' 'Buffers: 4124 kB' 'Cached: 17993216 kB' 'SwapCached: 0 kB' 'Active: 14980820 kB' 'Inactive: 3718324 kB' 'Active(anon): 13738064 kB' 'Inactive(anon): 0 kB' 'Active(file): 1242756 kB' 'Inactive(file): 3718324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 705020 kB' 'Mapped: 204640 kB' 'Shmem: 13036260 kB' 'KReclaimable: 505628 kB' 'Slab: 1156560 kB' 'SReclaimable: 505628 kB' 'SUnreclaim: 650932 kB' 'KernelStack: 20944 kB' 'PageTables: 10644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 15272040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316804 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
3767252 kB' 'DirectMap2M: 42049536 kB' 'DirectMap1G: 156237824 kB' 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 
17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 
-- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.567 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.567 17:07:04 -- setup/common.sh@31 -- # IFS=': 
' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.568 17:07:04 -- setup/common.sh@33 -- # echo 0 00:02:55.568 17:07:04 -- setup/common.sh@33 -- # return 0 00:02:55.568 17:07:04 -- setup/hugepages.sh@99 -- # surp=0 00:02:55.568 17:07:04 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:55.568 17:07:04 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:55.568 17:07:04 -- setup/common.sh@18 -- # local node= 00:02:55.568 17:07:04 -- setup/common.sh@19 -- # local var val 00:02:55.568 17:07:04 -- setup/common.sh@20 -- # local mem_f mem 00:02:55.568 17:07:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.568 17:07:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.568 17:07:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.568 17:07:04 -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.568 17:07:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.568 17:07:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 166857180 kB' 'MemAvailable: 170926624 kB' 'Buffers: 4124 kB' 'Cached: 17993216 kB' 'SwapCached: 0 kB' 'Active: 14980924 kB' 'Inactive: 3718324 kB' 'Active(anon): 13738168 kB' 'Inactive(anon): 0 kB' 'Active(file): 1242756 kB' 'Inactive(file): 3718324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 705124 kB' 'Mapped: 204564 kB' 'Shmem: 13036260 kB' 'KReclaimable: 505628 kB' 'Slab: 1156568 kB' 'SReclaimable: 505628 kB' 'SUnreclaim: 650940 kB' 'KernelStack: 20768 kB' 'PageTables: 9952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 15273448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316708 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3767252 kB' 'DirectMap2M: 42049536 kB' 'DirectMap1G: 156237824 kB' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- 
setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 
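The repetitive blocks above are bash xtrace of setup/common.sh's get_meminfo: verify_nr_hugepages dumps /proc/meminfo once per field it needs (AnonHugePages, HugePages_Surp, HugePages_Rsvd, ...), then walks the dump line by line with IFS=': ' until the key matches and echoes its value. A stripped-down, illustrative equivalent is sketched below (not the script's exact code; the per-node variant additionally strips the leading "Node <N> " prefix from /sys/devices/system/node/nodeN/meminfo before scanning):

  get_meminfo_field() {
    local key=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do      # "_" swallows the trailing "kB" unit
      [[ $var == "$key" ]] || continue
      echo "$val"
      return 0
    done < "$file"
    return 1
  }
  # e.g. get_meminfo_field AnonHugePages   -> 0      (kB)
  #      get_meminfo_field HugePages_Total -> 1024   (pages)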
00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.568 17:07:04 -- 
setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.568 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.568 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 
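The AnonHugePages, HugePages_Surp and HugePages_Rsvd lookups (all zero in this run, per the meminfo dumps above) feed the consistency check that follows in the trace: the pool only passes if HugePages_Total equals the requested nr_hugepages plus surplus plus reserved pages, checked system-wide and then per NUMA node against /sys/devices/system/node/nodeN/meminfo. Roughly, reusing the hypothetical helper sketched earlier:

  nr_hugepages=1024
  total=$(get_meminfo_field HugePages_Total)   # 1024 here
  surp=$(get_meminfo_field HugePages_Surp)     # 0
  resv=$(get_meminfo_field HugePages_Rsvd)     # 0
  (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2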
00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.569 17:07:04 -- setup/common.sh@33 -- # echo 0 00:02:55.569 17:07:04 -- setup/common.sh@33 -- # return 0 00:02:55.569 17:07:04 -- setup/hugepages.sh@100 -- # resv=0 00:02:55.569 17:07:04 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:55.569 nr_hugepages=1024 00:02:55.569 17:07:04 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:55.569 resv_hugepages=0 00:02:55.569 17:07:04 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:55.569 surplus_hugepages=0 00:02:55.569 17:07:04 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:55.569 anon_hugepages=0 00:02:55.569 17:07:04 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:55.569 17:07:04 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:55.569 17:07:04 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:55.569 17:07:04 -- setup/common.sh@17 -- # 
local get=HugePages_Total 00:02:55.569 17:07:04 -- setup/common.sh@18 -- # local node= 00:02:55.569 17:07:04 -- setup/common.sh@19 -- # local var val 00:02:55.569 17:07:04 -- setup/common.sh@20 -- # local mem_f mem 00:02:55.569 17:07:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.569 17:07:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.569 17:07:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.569 17:07:04 -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.569 17:07:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 166853996 kB' 'MemAvailable: 170923440 kB' 'Buffers: 4124 kB' 'Cached: 17993236 kB' 'SwapCached: 0 kB' 'Active: 14980952 kB' 'Inactive: 3718324 kB' 'Active(anon): 13738196 kB' 'Inactive(anon): 0 kB' 'Active(file): 1242756 kB' 'Inactive(file): 3718324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 705672 kB' 'Mapped: 204564 kB' 'Shmem: 13036280 kB' 'KReclaimable: 505628 kB' 'Slab: 1156568 kB' 'SReclaimable: 505628 kB' 'SUnreclaim: 650940 kB' 'KernelStack: 20912 kB' 'PageTables: 10264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 15273464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316916 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3767252 kB' 'DirectMap2M: 42049536 kB' 'DirectMap1G: 156237824 kB' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.569 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.569 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 
17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.570 
17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.570 17:07:04 -- setup/common.sh@33 -- # echo 1024 00:02:55.570 17:07:04 -- setup/common.sh@33 -- # return 0 00:02:55.570 17:07:04 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:55.570 17:07:04 -- setup/hugepages.sh@112 -- # get_nodes 00:02:55.570 17:07:04 -- setup/hugepages.sh@27 -- # local node 00:02:55.570 17:07:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:55.570 17:07:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:55.570 17:07:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:55.570 17:07:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:55.570 17:07:04 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:55.570 17:07:04 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:55.570 17:07:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:55.570 17:07:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:55.570 17:07:04 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:55.570 17:07:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:55.570 17:07:04 -- setup/common.sh@18 -- # local node=0 00:02:55.570 17:07:04 -- setup/common.sh@19 -- # local var val 00:02:55.570 17:07:04 -- setup/common.sh@20 -- # local mem_f mem 00:02:55.570 17:07:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.570 17:07:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:55.570 17:07:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:55.570 17:07:04 -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.570 17:07:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 89308076 
kB' 'MemUsed: 8354608 kB' 'SwapCached: 0 kB' 'Active: 4712872 kB' 'Inactive: 326360 kB' 'Active(anon): 3998836 kB' 'Inactive(anon): 0 kB' 'Active(file): 714036 kB' 'Inactive(file): 326360 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4515964 kB' 'Mapped: 92848 kB' 'AnonPages: 526396 kB' 'Shmem: 3475568 kB' 'KernelStack: 12200 kB' 'PageTables: 5584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 214128 kB' 'Slab: 508888 kB' 'SReclaimable: 214128 kB' 'SUnreclaim: 294760 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.570 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.570 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.571 17:07:04 -- setup/common.sh@31 -- 
# read -r var val _ 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.571 17:07:04 
-- setup/common.sh@31 -- # IFS=': ' 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # continue 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.571 17:07:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.571 17:07:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.571 17:07:04 -- setup/common.sh@33 -- # echo 0 00:02:55.571 17:07:04 -- setup/common.sh@33 -- # return 0 00:02:55.571 17:07:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:55.571 17:07:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:55.571 17:07:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:55.571 17:07:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:55.571 17:07:04 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:55.571 node0=1024 expecting 1024 00:02:55.571 17:07:04 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:55.571 00:02:55.571 real 0m4.299s 00:02:55.571 user 0m1.141s 00:02:55.571 sys 0m1.755s 00:02:55.571 17:07:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:55.571 17:07:04 -- common/autotest_common.sh@10 -- # set +x 00:02:55.571 ************************************ 00:02:55.571 END TEST default_setup 00:02:55.571 ************************************ 00:02:55.571 17:07:04 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:55.571 17:07:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:55.571 17:07:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:55.571 17:07:04 -- common/autotest_common.sh@10 -- # set +x 00:02:55.829 ************************************ 00:02:55.829 START TEST per_node_1G_alloc 00:02:55.829 ************************************ 00:02:55.829 17:07:04 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:02:55.829 17:07:04 -- setup/hugepages.sh@143 -- # local IFS=, 00:02:55.829 17:07:04 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:55.829 17:07:04 -- setup/hugepages.sh@49 -- # local size=1048576 00:02:55.829 17:07:04 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:55.829 17:07:04 -- setup/hugepages.sh@51 -- # shift 00:02:55.829 17:07:04 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:55.829 17:07:04 -- setup/hugepages.sh@52 -- # local node_ids 00:02:55.829 17:07:04 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:55.829 17:07:04 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:55.829 17:07:04 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:55.829 17:07:04 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:55.829 17:07:04 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:55.829 17:07:04 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:55.829 17:07:04 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:55.829 17:07:04 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:55.829 17:07:04 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:55.829 17:07:04 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:55.829 17:07:04 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:55.829 17:07:04 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:55.829 17:07:04 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:55.829 17:07:04 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:55.829 17:07:04 -- setup/hugepages.sh@73 -- # return 0 00:02:55.829 17:07:04 -- setup/hugepages.sh@146 -- # 
NRHUGE=512 00:02:55.829 17:07:04 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:02:55.829 17:07:04 -- setup/hugepages.sh@146 -- # setup output 00:02:55.829 17:07:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:55.829 17:07:04 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:02:58.365 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:58.365 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:58.365 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:58.365 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:58.365 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:58.365 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:58.365 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:58.365 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:58.365 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:58.365 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:58.365 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:58.365 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:58.365 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:58.365 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:58.365 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:58.365 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:58.365 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:58.365 17:07:07 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:02:58.365 17:07:07 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:02:58.365 17:07:07 -- setup/hugepages.sh@89 -- # local node 00:02:58.365 17:07:07 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:58.365 17:07:07 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:58.365 17:07:07 -- setup/hugepages.sh@92 -- # local surp 00:02:58.365 17:07:07 -- setup/hugepages.sh@93 -- # local resv 00:02:58.365 17:07:07 -- setup/hugepages.sh@94 -- # local anon 00:02:58.365 17:07:07 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:58.365 17:07:07 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:58.365 17:07:07 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:58.365 17:07:07 -- setup/common.sh@18 -- # local node= 00:02:58.365 17:07:07 -- setup/common.sh@19 -- # local var val 00:02:58.365 17:07:07 -- setup/common.sh@20 -- # local mem_f mem 00:02:58.365 17:07:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.365 17:07:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:58.365 17:07:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:58.365 17:07:07 -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.365 17:07:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.365 17:07:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 166888300 kB' 'MemAvailable: 170957744 kB' 'Buffers: 4124 kB' 'Cached: 17993332 kB' 'SwapCached: 0 kB' 'Active: 14983132 kB' 'Inactive: 3718324 kB' 'Active(anon): 13740376 kB' 'Inactive(anon): 0 kB' 'Active(file): 1242756 kB' 'Inactive(file): 3718324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 
707284 kB' 'Mapped: 204812 kB' 'Shmem: 13036376 kB' 'KReclaimable: 505628 kB' 'Slab: 1157324 kB' 'SReclaimable: 505628 kB' 'SUnreclaim: 651696 kB' 'KernelStack: 20992 kB' 'PageTables: 11092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 15274572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316980 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3767252 kB' 'DirectMap2M: 42049536 kB' 'DirectMap1G: 156237824 kB' 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.365 
17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.365 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.365 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 
17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:58.366 17:07:07 -- setup/common.sh@33 -- # echo 0 00:02:58.366 17:07:07 -- setup/common.sh@33 -- # return 0 00:02:58.366 17:07:07 -- setup/hugepages.sh@97 -- # anon=0 00:02:58.366 17:07:07 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:58.366 17:07:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:58.366 17:07:07 -- setup/common.sh@18 -- # local node= 00:02:58.366 17:07:07 -- setup/common.sh@19 -- # local var val 00:02:58.366 17:07:07 -- setup/common.sh@20 -- # local mem_f mem 00:02:58.366 17:07:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.366 17:07:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:58.366 17:07:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:58.366 17:07:07 -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.366 17:07:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 166893660 kB' 'MemAvailable: 170963104 kB' 'Buffers: 4124 kB' 'Cached: 17993332 kB' 'SwapCached: 0 kB' 'Active: 14983800 kB' 'Inactive: 3718324 kB' 'Active(anon): 13741044 kB' 'Inactive(anon): 0 kB' 'Active(file): 1242756 kB' 'Inactive(file): 3718324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 707880 kB' 'Mapped: 204740 kB' 'Shmem: 13036376 kB' 'KReclaimable: 505628 kB' 'Slab: 1157324 kB' 'SReclaimable: 505628 kB' 'SUnreclaim: 651696 kB' 'KernelStack: 21040 kB' 'PageTables: 11044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 15274584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316948 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3767252 kB' 'DirectMap2M: 42049536 kB' 'DirectMap1G: 156237824 kB' 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 
17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 
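The long run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" entries here is the meminfo scan in setup/common.sh: the xtrace shows the script splitting each meminfo line on ': ', skipping keys until the requested field appears, and echoing its value. A minimal standalone sketch of that loop, reconstructed from the trace rather than copied from the script (the function name and the sed-based prefix strip are illustrative; the script itself uses mapfile for that step):

#!/usr/bin/env bash
# Sketch of the field scan driving the trace above: open /proc/meminfo (or the
# per-node copy under /sys), split each line on ': ', skip keys until the
# requested one appears, then print its value.
get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # With a node argument, read that node's meminfo instead, as the trace does
    # for /sys/devices/system/node/node0/meminfo.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    # Per-node files prefix each line with "Node <N> "; drop that so the same
    # key comparison works for both files (the trace shows the script doing the
    # equivalent on a mapfile'd array).
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}
# Example matching this log: get_meminfo_sketch HugePages_Surp 0   -> 0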
00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.366 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.366 17:07:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 
-- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 
-- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.367 17:07:07 -- setup/common.sh@33 -- # echo 0 00:02:58.367 17:07:07 -- setup/common.sh@33 -- # return 0 00:02:58.367 17:07:07 -- setup/hugepages.sh@99 -- # surp=0 00:02:58.367 17:07:07 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:58.367 17:07:07 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:58.367 17:07:07 -- setup/common.sh@18 -- # local node= 00:02:58.367 17:07:07 -- setup/common.sh@19 -- # local var val 00:02:58.367 17:07:07 -- setup/common.sh@20 -- # local mem_f mem 00:02:58.367 17:07:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.367 17:07:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:58.367 17:07:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:58.367 17:07:07 -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.367 17:07:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 166893312 kB' 'MemAvailable: 170962756 kB' 'Buffers: 4124 kB' 'Cached: 17993336 kB' 'SwapCached: 0 kB' 'Active: 14984016 kB' 'Inactive: 3718324 kB' 'Active(anon): 13741260 kB' 'Inactive(anon): 0 kB' 'Active(file): 1242756 kB' 'Inactive(file): 3718324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 708648 kB' 'Mapped: 204664 kB' 'Shmem: 13036380 kB' 'KReclaimable: 505628 kB' 'Slab: 1157292 kB' 'SReclaimable: 505628 kB' 'SUnreclaim: 651664 kB' 'KernelStack: 21040 kB' 'PageTables: 11084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 15274600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316964 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3767252 kB' 'DirectMap2M: 42049536 kB' 'DirectMap1G: 156237824 kB' 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 
17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.367 17:07:07 -- setup/common.sh@31 -- # read -r 
var val _ 00:02:58.367 17:07:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 
00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:58.368 17:07:07 -- setup/common.sh@33 -- # echo 0 00:02:58.368 17:07:07 -- setup/common.sh@33 -- # return 0 00:02:58.368 17:07:07 -- setup/hugepages.sh@100 -- # resv=0 00:02:58.368 17:07:07 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:58.368 nr_hugepages=1024 00:02:58.368 17:07:07 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:58.368 resv_hugepages=0 00:02:58.368 17:07:07 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:58.368 surplus_hugepages=0 00:02:58.368 17:07:07 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:58.368 anon_hugepages=0 00:02:58.368 17:07:07 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:58.368 17:07:07 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 
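The xtrace above shows the pattern the traced get_meminfo helper appears to follow: read /proc/meminfo (or a per-node meminfo file) into memory, strip any "Node <n>" prefix, split each line on IFS=': ', and echo the value whose field name matches the requested key (HugePages_Rsvd, HugePages_Total, and so on). The following is a minimal standalone sketch of that pattern under those assumptions; the function name is illustrative, not the SPDK helper itself.

#!/usr/bin/env bash
shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip seen in the trace

# get_meminfo_sketch <field> [node]
# Echo the value of <field> from /proc/meminfo, or from
# /sys/devices/system/node/node<node>/meminfo when a node number is given.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    local line var val _
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }        # per-node files prefix each line with "Node <n> "
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                    # e.g. the kB count, or the bare hugepage count
            return 0
        fi
    done <"$mem_f"
    return 1
}

# Example lookups (values as reported in the log above):
#   get_meminfo_sketch HugePages_Total     -> 1024
#   get_meminfo_sketch HugePages_Surp 0    -> 0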
00:02:58.368 17:07:07 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:58.368 17:07:07 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:58.368 17:07:07 -- setup/common.sh@18 -- # local node= 00:02:58.368 17:07:07 -- setup/common.sh@19 -- # local var val 00:02:58.368 17:07:07 -- setup/common.sh@20 -- # local mem_f mem 00:02:58.368 17:07:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.368 17:07:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:58.368 17:07:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:58.368 17:07:07 -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.368 17:07:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 166895224 kB' 'MemAvailable: 170964668 kB' 'Buffers: 4124 kB' 'Cached: 17993360 kB' 'SwapCached: 0 kB' 'Active: 14982648 kB' 'Inactive: 3718324 kB' 'Active(anon): 13739892 kB' 'Inactive(anon): 0 kB' 'Active(file): 1242756 kB' 'Inactive(file): 3718324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 706724 kB' 'Mapped: 204608 kB' 'Shmem: 13036404 kB' 'KReclaimable: 505628 kB' 'Slab: 1157292 kB' 'SReclaimable: 505628 kB' 'SUnreclaim: 651664 kB' 'KernelStack: 20928 kB' 'PageTables: 10184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 15274612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316900 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3767252 kB' 'DirectMap2M: 42049536 kB' 'DirectMap1G: 156237824 kB' 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.368 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.368 17:07:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 
-- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 
00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 -- 
setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:58.369 17:07:07 -- setup/common.sh@33 -- # echo 1024 00:02:58.369 17:07:07 -- setup/common.sh@33 -- # return 0 00:02:58.369 17:07:07 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:58.369 17:07:07 -- setup/hugepages.sh@112 -- # get_nodes 00:02:58.369 17:07:07 -- setup/hugepages.sh@27 -- # local node 00:02:58.369 17:07:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:58.369 17:07:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:58.369 17:07:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:58.369 17:07:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:58.369 17:07:07 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:58.369 17:07:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:58.369 17:07:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:58.369 17:07:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:58.369 17:07:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:58.369 17:07:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:58.369 17:07:07 -- setup/common.sh@18 -- # local node=0 00:02:58.369 17:07:07 -- setup/common.sh@19 -- # local var val 00:02:58.369 17:07:07 -- setup/common.sh@20 -- # local mem_f mem 00:02:58.369 17:07:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.369 17:07:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:58.369 17:07:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:58.369 17:07:07 -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.369 17:07:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@16 -- # printf 
'%s\n' 'MemTotal: 97662684 kB' 'MemFree: 90385124 kB' 'MemUsed: 7277560 kB' 'SwapCached: 0 kB' 'Active: 4712196 kB' 'Inactive: 326360 kB' 'Active(anon): 3998160 kB' 'Inactive(anon): 0 kB' 'Active(file): 714036 kB' 'Inactive(file): 326360 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4516056 kB' 'Mapped: 92864 kB' 'AnonPages: 525640 kB' 'Shmem: 3475660 kB' 'KernelStack: 12136 kB' 'PageTables: 5264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 214128 kB' 'Slab: 509716 kB' 'SReclaimable: 214128 kB' 'SUnreclaim: 295588 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # 
continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.369 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.369 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 
17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@33 -- # echo 0 00:02:58.370 17:07:07 -- setup/common.sh@33 -- # return 0 00:02:58.370 17:07:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:58.370 17:07:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:58.370 17:07:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:58.370 17:07:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:58.370 17:07:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:58.370 17:07:07 -- setup/common.sh@18 -- # local node=1 00:02:58.370 17:07:07 -- setup/common.sh@19 -- # local var val 00:02:58.370 17:07:07 -- setup/common.sh@20 -- # local mem_f mem 00:02:58.370 17:07:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.370 17:07:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:58.370 17:07:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:58.370 17:07:07 -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.370 17:07:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 76512388 kB' 'MemUsed: 17206080 kB' 'SwapCached: 0 kB' 'Active: 10270420 kB' 'Inactive: 3391964 kB' 'Active(anon): 9741700 kB' 'Inactive(anon): 0 kB' 'Active(file): 528720 kB' 'Inactive(file): 3391964 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 13481444 kB' 'Mapped: 111744 kB' 'AnonPages: 181008 kB' 'Shmem: 9560760 kB' 'KernelStack: 8664 kB' 'PageTables: 5160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 291500 kB' 'Slab: 647576 kB' 'SReclaimable: 291500 kB' 'SUnreclaim: 356076 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 
00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.370 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.370 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.371 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.371 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.371 17:07:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.371 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.371 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.371 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.371 17:07:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.371 17:07:07 -- setup/common.sh@32 -- # continue 00:02:58.371 17:07:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.371 17:07:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.371 17:07:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.371 17:07:07 -- setup/common.sh@33 -- # echo 0 00:02:58.371 17:07:07 -- setup/common.sh@33 -- # return 0 00:02:58.371 17:07:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:58.371 17:07:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:58.371 17:07:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:58.371 17:07:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:58.371 17:07:07 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:58.371 node0=512 expecting 512 00:02:58.371 17:07:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:58.371 17:07:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:58.371 17:07:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:58.371 17:07:07 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:58.371 node1=512 expecting 512 00:02:58.371 17:07:07 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:58.371 00:02:58.371 real 0m2.617s 00:02:58.371 user 0m1.040s 00:02:58.371 sys 0m1.592s 00:02:58.371 17:07:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:58.371 17:07:07 -- common/autotest_common.sh@10 -- # set +x 00:02:58.371 ************************************ 00:02:58.371 END TEST per_node_1G_alloc 00:02:58.371 ************************************ 00:02:58.371 17:07:07 -- 
setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:58.371 17:07:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:58.371 17:07:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:58.371 17:07:07 -- common/autotest_common.sh@10 -- # set +x 00:02:58.629 ************************************ 00:02:58.629 START TEST even_2G_alloc 00:02:58.629 ************************************ 00:02:58.629 17:07:07 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:02:58.629 17:07:07 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:58.629 17:07:07 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:58.629 17:07:07 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:58.629 17:07:07 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:58.629 17:07:07 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:58.629 17:07:07 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:58.629 17:07:07 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:58.629 17:07:07 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:58.629 17:07:07 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:58.629 17:07:07 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:58.629 17:07:07 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:58.629 17:07:07 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:58.629 17:07:07 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:58.629 17:07:07 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:58.629 17:07:07 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:58.629 17:07:07 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:58.629 17:07:07 -- setup/hugepages.sh@83 -- # : 512 00:02:58.629 17:07:07 -- setup/hugepages.sh@84 -- # : 1 00:02:58.629 17:07:07 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:58.629 17:07:07 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:58.629 17:07:07 -- setup/hugepages.sh@83 -- # : 0 00:02:58.629 17:07:07 -- setup/hugepages.sh@84 -- # : 0 00:02:58.629 17:07:07 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:58.629 17:07:07 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:58.629 17:07:07 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:58.629 17:07:07 -- setup/hugepages.sh@153 -- # setup output 00:02:58.629 17:07:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:58.629 17:07:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:01.161 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:01.161 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:01.161 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:01.161 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:01.161 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:01.161 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:01.161 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:01.161 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:01.161 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:01.161 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:01.161 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:01.161 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:01.161 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:01.161 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:01.161 0000:80:04.2 (8086 2021): 
Already using the vfio-pci driver 00:03:01.161 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:01.161 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:01.161 17:07:10 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:01.161 17:07:10 -- setup/hugepages.sh@89 -- # local node 00:03:01.161 17:07:10 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:01.161 17:07:10 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:01.161 17:07:10 -- setup/hugepages.sh@92 -- # local surp 00:03:01.161 17:07:10 -- setup/hugepages.sh@93 -- # local resv 00:03:01.161 17:07:10 -- setup/hugepages.sh@94 -- # local anon 00:03:01.161 17:07:10 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:01.161 17:07:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:01.161 17:07:10 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:01.161 17:07:10 -- setup/common.sh@18 -- # local node= 00:03:01.161 17:07:10 -- setup/common.sh@19 -- # local var val 00:03:01.161 17:07:10 -- setup/common.sh@20 -- # local mem_f mem 00:03:01.161 17:07:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.161 17:07:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.161 17:07:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.161 17:07:10 -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.161 17:07:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.161 17:07:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 166905084 kB' 'MemAvailable: 170974528 kB' 'Buffers: 4124 kB' 'Cached: 17993436 kB' 'SwapCached: 0 kB' 'Active: 14983660 kB' 'Inactive: 3718324 kB' 'Active(anon): 13740904 kB' 'Inactive(anon): 0 kB' 'Active(file): 1242756 kB' 'Inactive(file): 3718324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 708088 kB' 'Mapped: 203592 kB' 'Shmem: 13036480 kB' 'KReclaimable: 505628 kB' 'Slab: 1156396 kB' 'SReclaimable: 505628 kB' 'SUnreclaim: 650768 kB' 'KernelStack: 21216 kB' 'PageTables: 11016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 15267572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316980 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3767252 kB' 'DirectMap2M: 42049536 kB' 'DirectMap1G: 156237824 kB' 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # [[ MemAvailable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.161 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.161 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.162 
17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.162 17:07:10 -- 
setup/common.sh@33 -- # echo 0 00:03:01.162 17:07:10 -- setup/common.sh@33 -- # return 0 00:03:01.162 17:07:10 -- setup/hugepages.sh@97 -- # anon=0 00:03:01.162 17:07:10 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:01.162 17:07:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:01.162 17:07:10 -- setup/common.sh@18 -- # local node= 00:03:01.162 17:07:10 -- setup/common.sh@19 -- # local var val 00:03:01.162 17:07:10 -- setup/common.sh@20 -- # local mem_f mem 00:03:01.162 17:07:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.162 17:07:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.162 17:07:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.162 17:07:10 -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.162 17:07:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.162 17:07:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 166909436 kB' 'MemAvailable: 170978880 kB' 'Buffers: 4124 kB' 'Cached: 17993436 kB' 'SwapCached: 0 kB' 'Active: 14983628 kB' 'Inactive: 3718324 kB' 'Active(anon): 13740872 kB' 'Inactive(anon): 0 kB' 'Active(file): 1242756 kB' 'Inactive(file): 3718324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 707584 kB' 'Mapped: 203536 kB' 'Shmem: 13036480 kB' 'KReclaimable: 505628 kB' 'Slab: 1156236 kB' 'SReclaimable: 505628 kB' 'SUnreclaim: 650608 kB' 'KernelStack: 21088 kB' 'PageTables: 11160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 15267584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316948 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3767252 kB' 'DirectMap2M: 42049536 kB' 'DirectMap1G: 156237824 kB' 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.162 
17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.162 
17:07:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.162 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.162 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # IFS=': 
' 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.163 17:07:10 -- setup/common.sh@33 -- # echo 0 00:03:01.163 17:07:10 -- setup/common.sh@33 -- # return 0 00:03:01.163 17:07:10 -- setup/hugepages.sh@99 -- # surp=0 00:03:01.163 17:07:10 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:01.163 17:07:10 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:01.163 17:07:10 -- setup/common.sh@18 -- # local node= 00:03:01.163 17:07:10 -- setup/common.sh@19 -- # local var val 00:03:01.163 17:07:10 -- setup/common.sh@20 -- # local mem_f mem 00:03:01.163 17:07:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.163 17:07:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.163 17:07:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.163 17:07:10 -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.163 17:07:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.163 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.163 17:07:10 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:01.163 17:07:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 166907940 kB' 'MemAvailable: 170977384 kB' 'Buffers: 4124 kB' 'Cached: 17993452 kB' 'SwapCached: 0 kB' 'Active: 14982232 kB' 'Inactive: 3718324 kB' 'Active(anon): 13739476 kB' 'Inactive(anon): 0 kB' 'Active(file): 1242756 kB' 'Inactive(file): 3718324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 706112 kB' 'Mapped: 203472 kB' 'Shmem: 13036496 kB' 'KReclaimable: 505628 kB' 'Slab: 1156260 kB' 'SReclaimable: 505628 kB' 'SUnreclaim: 650632 kB' 'KernelStack: 21024 kB' 'PageTables: 10904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 15267600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316948 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3767252 kB' 'DirectMap2M: 42049536 kB' 'DirectMap1G: 156237824 kB' 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.163 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 
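The setup/common.sh@31/@32 entries running through this part of the trace are get_meminfo at work: it snapshots /proc/meminfo (or a per-node meminfo file when a node number is passed), then walks the snapshot with IFS=': ' until the requested key is reached and echoes its value. A minimal standalone sketch of the same idea, assuming a hypothetical helper name get_field and ignoring the "Node <n>" prefix that per-node meminfo files carry (the real script strips that prefix, as the mem=("${mem[@]#Node +([0-9]) }") entries show):

# Sketch only: look up one field in a meminfo-style file.
get_field() {                           # usage: get_field HugePages_Total [file]
    local key=$1 mem_f=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            echo "$val"                 # numeric value; a trailing "kB" unit lands in $_
            return 0
        fi
    done < "$mem_f"
    return 1                            # key not present
}

Run against the snapshot printed just above, get_field HugePages_Total would print 1024 and get_field HugePages_Rsvd would print 0, which is what the resv=0 assignment further on records.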
00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- 
setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.164 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.164 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.165 17:07:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.165 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.165 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.165 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.165 17:07:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.165 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.165 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.165 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.165 17:07:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.165 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.165 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.165 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.165 17:07:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.165 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.165 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.165 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.165 17:07:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.165 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.165 17:07:10 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:01.165 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.165 17:07:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.165 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.165 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.165 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.165 17:07:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.165 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.165 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.165 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.165 17:07:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.165 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.165 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.165 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.165 17:07:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.165 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.165 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.165 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.165 17:07:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.165 17:07:10 -- setup/common.sh@33 -- # echo 0 00:03:01.165 17:07:10 -- setup/common.sh@33 -- # return 0 00:03:01.165 17:07:10 -- setup/hugepages.sh@100 -- # resv=0 00:03:01.165 17:07:10 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:01.165 nr_hugepages=1024 00:03:01.165 17:07:10 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:01.165 resv_hugepages=0 00:03:01.165 17:07:10 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:01.165 surplus_hugepages=0 00:03:01.165 17:07:10 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:01.165 anon_hugepages=0 00:03:01.165 17:07:10 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:01.165 17:07:10 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:01.165 17:07:10 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:01.165 17:07:10 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:01.165 17:07:10 -- setup/common.sh@18 -- # local node= 00:03:01.165 17:07:10 -- setup/common.sh@19 -- # local var val 00:03:01.165 17:07:10 -- setup/common.sh@20 -- # local mem_f mem 00:03:01.165 17:07:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.165 17:07:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.165 17:07:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.165 17:07:10 -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.165 17:07:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.165 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.165 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.165 17:07:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 166909368 kB' 'MemAvailable: 170978812 kB' 'Buffers: 4124 kB' 'Cached: 17993464 kB' 'SwapCached: 0 kB' 'Active: 14981568 kB' 'Inactive: 3718324 kB' 'Active(anon): 13738812 kB' 'Inactive(anon): 0 kB' 'Active(file): 1242756 kB' 'Inactive(file): 3718324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 705460 kB' 'Mapped: 203472 kB' 'Shmem: 13036508 kB' 'KReclaimable: 505628 kB' 'Slab: 
1156260 kB' 'SReclaimable: 505628 kB' 'SUnreclaim: 650632 kB' 'KernelStack: 20800 kB' 'PageTables: 9928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 15264976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316740 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3767252 kB' 'DirectMap2M: 42049536 kB' 'DirectMap1G: 156237824 kB' 00:03:01.165 17:07:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.165 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.165 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.165 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.165 17:07:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.165 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.165 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.165 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.165 17:07:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.165 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.165 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.165 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.165 17:07:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.165 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.424 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.424 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.424 17:07:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.424 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.424 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.424 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.424 17:07:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.424 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.424 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.424 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.424 17:07:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.424 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 
17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.425 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.425 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # continue 
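The field scan continuing just below is get_meminfo HugePages_Total for the whole system; it reaches that field, echoes 1024, and verify_nr_hugepages then checks that 1024 == nr_hugepages + surplus + reserved (1024 + 0 + 0 here). Because HUGE_EVEN_ALLOC=yes was set for a 2097152 kB request with a 2048 kB hugepage size, the 1024 pages are expected to be split evenly, 512 per node, across the two NUMA nodes, and the trace goes on to read node0's counts from /sys/devices/system/node/node0/meminfo. A rough sketch of that per-node check, under the assumption that the counts are taken from the node meminfo files seen in the trace (helper names and message wording are illustrative, not the script's):

# Sketch only: confirm each NUMA node got its even share of 2 MiB hugepages.
expected_per_node=512                   # 2097152 kB / 2048 kB = 1024 pages over 2 nodes
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # per-node meminfo lines look like: "Node 0 HugePages_Total:   512"
    total=$(awk -v n="$node" '$1 == "Node" && $2 == n && $3 == "HugePages_Total:" { print $4 }' \
            "$node_dir/meminfo")
    if [[ ${total:-0} -ne $expected_per_node ]]; then
        echo "node$node reports ${total:-0} hugepages, expected $expected_per_node" >&2
        exit 1
    fi
done

With the HugePages_Total: 512 and HugePages_Free: 512 values printed for node0 a little further down, this check passes for the first node before the loop moves on to node1.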
00:03:01.426 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.426 17:07:10 -- setup/common.sh@33 -- # echo 1024 00:03:01.426 17:07:10 -- setup/common.sh@33 -- # return 0 00:03:01.426 17:07:10 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:01.426 17:07:10 -- setup/hugepages.sh@112 -- # get_nodes 00:03:01.426 17:07:10 -- setup/hugepages.sh@27 -- # local node 00:03:01.426 17:07:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:01.426 17:07:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:01.426 17:07:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:01.426 17:07:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:01.426 17:07:10 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:01.426 17:07:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:01.426 17:07:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:01.426 17:07:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:01.426 17:07:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:01.426 17:07:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:01.426 17:07:10 -- setup/common.sh@18 -- # local node=0 00:03:01.426 17:07:10 -- setup/common.sh@19 -- # local var val 00:03:01.426 17:07:10 -- setup/common.sh@20 -- # local mem_f mem 00:03:01.426 17:07:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.426 17:07:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:01.426 17:07:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:01.426 17:07:10 -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.426 17:07:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.426 17:07:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 90388980 kB' 'MemUsed: 7273704 kB' 'SwapCached: 0 kB' 'Active: 4710852 kB' 'Inactive: 326360 kB' 'Active(anon): 3996816 kB' 'Inactive(anon): 0 kB' 'Active(file): 714036 kB' 'Inactive(file): 326360 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4516140 kB' 'Mapped: 91756 kB' 'AnonPages: 524184 kB' 'Shmem: 3475744 kB' 'KernelStack: 12216 kB' 'PageTables: 5500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 214128 kB' 'Slab: 508676 kB' 'SReclaimable: 214128 kB' 'SUnreclaim: 294548 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.426 
17:07:10 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.426 17:07:10 -- setup/common.sh@31 
-- # IFS=': ' 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
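For the node-scoped counters (the HugePages_Surp lookup on node 0 above), setup/common.sh@22-29 switches mem_f to /sys/devices/system/node/node0/meminfo and strips the leading "Node 0 " from every line with the extglob pattern ${mem[@]#Node +([0-9]) } before running the same key scan. A minimal sketch under that assumption (illustrative helper, not part of the SPDK scripts):

# Sketch of the per-node variant traced above: read nodeN/meminfo, drop the
# "Node N " prefix from each line, then scan for the requested key as before.
shopt -s extglob                              # needed for the +([0-9]) pattern
get_node_meminfo_sketch() {
    local get=$1 node=$2 line var val _
    local -a mem
    mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
    mem=("${mem[@]#Node +([0-9]) }")          # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
# e.g. get_node_meminfo_sketch HugePages_Surp 0   # prints 0 in the run traced here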
00:03:01.426 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.426 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.426 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.427 17:07:10 -- setup/common.sh@33 -- # echo 0 00:03:01.427 17:07:10 -- setup/common.sh@33 -- # return 0 00:03:01.427 17:07:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:01.427 17:07:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:01.427 17:07:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:01.427 17:07:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:01.427 17:07:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:01.427 17:07:10 -- setup/common.sh@18 -- # local node=1 00:03:01.427 17:07:10 -- setup/common.sh@19 -- # local var val 00:03:01.427 17:07:10 -- setup/common.sh@20 -- # local mem_f mem 00:03:01.427 17:07:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.427 17:07:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:01.427 17:07:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:01.427 17:07:10 -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:01.427 17:07:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.427 17:07:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 76520848 kB' 'MemUsed: 17197620 kB' 'SwapCached: 0 kB' 'Active: 10270340 kB' 'Inactive: 3391964 kB' 'Active(anon): 9741620 kB' 'Inactive(anon): 0 kB' 'Active(file): 528720 kB' 'Inactive(file): 3391964 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 13481464 kB' 'Mapped: 111704 kB' 'AnonPages: 180904 kB' 'Shmem: 9560780 kB' 'KernelStack: 8584 kB' 'PageTables: 4760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 291500 kB' 'Slab: 647712 kB' 'SReclaimable: 291500 kB' 'SUnreclaim: 356212 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.427 17:07:10 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.427 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.427 17:07:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.428 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.428 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.428 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.428 17:07:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.428 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.428 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.428 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.428 17:07:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.428 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.428 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.428 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.428 17:07:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.428 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.428 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.428 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.428 17:07:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.428 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.428 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.428 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.428 17:07:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.428 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.428 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.428 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.428 17:07:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.428 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.428 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.428 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.428 17:07:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.428 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.428 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.428 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.428 17:07:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.428 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.428 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.428 17:07:10 
-- setup/common.sh@31 -- # read -r var val _ 00:03:01.428 17:07:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.428 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.428 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.428 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.428 17:07:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.428 17:07:10 -- setup/common.sh@32 -- # continue 00:03:01.428 17:07:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.428 17:07:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.428 17:07:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.428 17:07:10 -- setup/common.sh@33 -- # echo 0 00:03:01.428 17:07:10 -- setup/common.sh@33 -- # return 0 00:03:01.428 17:07:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:01.428 17:07:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:01.428 17:07:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:01.428 17:07:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:01.428 17:07:10 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:01.428 node0=512 expecting 512 00:03:01.428 17:07:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:01.428 17:07:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:01.428 17:07:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:01.428 17:07:10 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:01.428 node1=512 expecting 512 00:03:01.428 17:07:10 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:01.428 00:03:01.428 real 0m2.816s 00:03:01.428 user 0m1.177s 00:03:01.428 sys 0m1.703s 00:03:01.428 17:07:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:01.428 17:07:10 -- common/autotest_common.sh@10 -- # set +x 00:03:01.428 ************************************ 00:03:01.428 END TEST even_2G_alloc 00:03:01.428 ************************************ 00:03:01.428 17:07:10 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:01.428 17:07:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:01.428 17:07:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:01.428 17:07:10 -- common/autotest_common.sh@10 -- # set +x 00:03:01.428 ************************************ 00:03:01.428 START TEST odd_alloc 00:03:01.428 ************************************ 00:03:01.428 17:07:10 -- common/autotest_common.sh@1111 -- # odd_alloc 00:03:01.428 17:07:10 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:01.428 17:07:10 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:01.428 17:07:10 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:01.428 17:07:10 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:01.428 17:07:10 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:01.428 17:07:10 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:01.428 17:07:10 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:01.428 17:07:10 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:01.428 17:07:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:01.428 17:07:10 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:01.428 17:07:10 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:01.428 17:07:10 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:01.428 17:07:10 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:01.428 
17:07:10 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:01.428 17:07:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:01.428 17:07:10 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:01.428 17:07:10 -- setup/hugepages.sh@83 -- # : 513 00:03:01.428 17:07:10 -- setup/hugepages.sh@84 -- # : 1 00:03:01.428 17:07:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:01.428 17:07:10 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:01.428 17:07:10 -- setup/hugepages.sh@83 -- # : 0 00:03:01.428 17:07:10 -- setup/hugepages.sh@84 -- # : 0 00:03:01.428 17:07:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:01.428 17:07:10 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:01.428 17:07:10 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:01.428 17:07:10 -- setup/hugepages.sh@160 -- # setup output 00:03:01.428 17:07:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:01.428 17:07:10 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:04.717 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:04.717 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:04.717 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:04.717 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:04.717 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:04.717 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:04.717 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:04.717 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:04.717 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:04.717 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:04.717 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:04.717 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:04.717 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:04.717 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:04.717 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:04.717 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:04.717 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:04.717 17:07:13 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:04.717 17:07:13 -- setup/hugepages.sh@89 -- # local node 00:03:04.717 17:07:13 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:04.717 17:07:13 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:04.717 17:07:13 -- setup/hugepages.sh@92 -- # local surp 00:03:04.717 17:07:13 -- setup/hugepages.sh@93 -- # local resv 00:03:04.717 17:07:13 -- setup/hugepages.sh@94 -- # local anon 00:03:04.717 17:07:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:04.717 17:07:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:04.717 17:07:13 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:04.717 17:07:13 -- setup/common.sh@18 -- # local node= 00:03:04.717 17:07:13 -- setup/common.sh@19 -- # local var val 00:03:04.717 17:07:13 -- setup/common.sh@20 -- # local mem_f mem 00:03:04.717 17:07:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.717 17:07:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.717 17:07:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.717 17:07:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.717 17:07:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) 
}") 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.717 17:07:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 166889084 kB' 'MemAvailable: 170958528 kB' 'Buffers: 4124 kB' 'Cached: 17993556 kB' 'SwapCached: 0 kB' 'Active: 14981088 kB' 'Inactive: 3718324 kB' 'Active(anon): 13738332 kB' 'Inactive(anon): 0 kB' 'Active(file): 1242756 kB' 'Inactive(file): 3718324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 704484 kB' 'Mapped: 203688 kB' 'Shmem: 13036600 kB' 'KReclaimable: 505628 kB' 'Slab: 1156332 kB' 'SReclaimable: 505628 kB' 'SUnreclaim: 650704 kB' 'KernelStack: 20720 kB' 'PageTables: 10036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 15265448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316772 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3767252 kB' 'DirectMap2M: 42049536 kB' 'DirectMap1G: 156237824 kB' 00:03:04.717 17:07:13 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.717 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.717 17:07:13 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.717 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.717 17:07:13 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.717 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.717 17:07:13 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.717 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.717 17:07:13 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.717 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.717 17:07:13 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.717 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.717 17:07:13 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.717 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.717 17:07:13 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.717 17:07:13 -- setup/common.sh@32 -- # 
continue 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.717 17:07:13 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.717 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.717 17:07:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.717 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.717 17:07:13 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.717 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.717 17:07:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.717 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.717 17:07:13 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.717 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.717 17:07:13 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.717 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.717 17:07:13 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.717 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.717 17:07:13 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.717 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.717 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.717 17:07:13 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # [[ AnonPages == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 
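The odd_alloc run started above requests 1025 hugepages (HUGEMEM=2049) and, per the setup/hugepages.sh@81-84 trace, walks the nodes from the last one down: node 1 gets 1025/2 = 512, and the remaining 513 land on node 0 on the next pass, before verify_nr_hugepages re-reads the counters. A minimal sketch of that split, assuming the same two-node layout (helper name is illustrative only):

# Sketch of the per-node split traced above: hand each node nr/nodes pages,
# working from the last node down so the remainder ends up on node 0.
split_hugepages_sketch() {
    local nr=$1 nodes=$2
    local -a per_node=()
    while (( nodes > 0 )); do
        per_node[nodes - 1]=$(( nr / nodes ))
        : $(( nr -= per_node[nodes - 1] ))
        : $(( nodes-- ))
    done
    echo "${per_node[@]}"
}
# e.g. split_hugepages_sketch 1025 2   # prints "513 512", matching nodes_test above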
00:03:04.718 17:07:13 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.718 17:07:13 -- setup/common.sh@33 -- # echo 0 00:03:04.718 17:07:13 -- setup/common.sh@33 -- # return 0 00:03:04.718 17:07:13 -- setup/hugepages.sh@97 -- # anon=0 00:03:04.718 17:07:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:04.718 17:07:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:04.718 17:07:13 -- setup/common.sh@18 -- # local node= 00:03:04.718 17:07:13 -- setup/common.sh@19 -- # local var val 00:03:04.718 17:07:13 -- setup/common.sh@20 -- # local mem_f mem 00:03:04.718 17:07:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.718 17:07:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.718 17:07:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.718 17:07:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.718 17:07:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.718 17:07:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 166888900 kB' 'MemAvailable: 170958344 kB' 'Buffers: 4124 kB' 'Cached: 17993560 kB' 'SwapCached: 0 kB' 'Active: 14980344 kB' 'Inactive: 3718324 kB' 'Active(anon): 13737588 kB' 'Inactive(anon): 0 kB' 'Active(file): 1242756 kB' 'Inactive(file): 3718324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 704200 kB' 'Mapped: 203564 kB' 
'Shmem: 13036604 kB' 'KReclaimable: 505628 kB' 'Slab: 1156312 kB' 'SReclaimable: 505628 kB' 'SUnreclaim: 650684 kB' 'KernelStack: 20704 kB' 'PageTables: 9996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 15265456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316740 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3767252 kB' 'DirectMap2M: 42049536 kB' 'DirectMap1G: 156237824 kB' 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.718 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.718 17:07:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 
17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.719 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.719 17:07:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.720 17:07:13 
-- setup/common.sh@32 -- # continue 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.720 17:07:13 -- setup/common.sh@33 -- # echo 0 00:03:04.720 17:07:13 -- setup/common.sh@33 -- # return 0 00:03:04.720 17:07:13 -- setup/hugepages.sh@99 -- # surp=0 00:03:04.720 17:07:13 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:04.720 17:07:13 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:04.720 17:07:13 -- setup/common.sh@18 -- # local node= 00:03:04.720 17:07:13 -- setup/common.sh@19 -- # local var val 00:03:04.720 17:07:13 -- setup/common.sh@20 -- # local mem_f mem 00:03:04.720 17:07:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.720 17:07:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.720 17:07:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.720 17:07:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.720 17:07:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 17:07:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 166888900 kB' 'MemAvailable: 170958344 kB' 'Buffers: 4124 kB' 'Cached: 17993568 kB' 'SwapCached: 0 kB' 'Active: 14980244 kB' 'Inactive: 3718324 kB' 'Active(anon): 13737488 kB' 'Inactive(anon): 0 kB' 'Active(file): 1242756 kB' 'Inactive(file): 3718324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 704048 kB' 'Mapped: 203564 kB' 'Shmem: 13036612 kB' 'KReclaimable: 505628 kB' 'Slab: 1156312 kB' 'SReclaimable: 505628 kB' 'SUnreclaim: 650684 kB' 'KernelStack: 20688 kB' 'PageTables: 9940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 15265472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316740 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3767252 kB' 'DirectMap2M: 42049536 kB' 'DirectMap1G: 156237824 kB' 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 17:07:13 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.720 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.720 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.721 
17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # [[ Percpu 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 17:07:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.721 17:07:13 -- setup/common.sh@33 -- # echo 0 00:03:04.721 17:07:13 -- setup/common.sh@33 -- # return 0 
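The long run of "continue" lines above is setup/common.sh's get_meminfo scanning every /proc/meminfo field until it reaches the requested key, first HugePages_Surp and then HugePages_Rsvd, both of which come back 0 on this host. A condensed reconstruction of that helper, assembled from the xtrace rather than copied from the SPDK source (the positional-parameter handling in particular is an assumption), looks roughly like this:

# Sketch of setup/common.sh:get_meminfo as traced above; a reconstruction, not
# the verbatim SPDK script.
shopt -s extglob                      # needed for the +([0-9]) pattern below
get_meminfo() {
    local get=$1 node=$2              # e.g. get_meminfo HugePages_Rsvd, get_meminfo HugePages_Surp 0
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # With a node argument, read the per-NUMA-node view instead.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every line with "Node <n> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the repeated "continue" seen in the trace
        echo "$val"                        # e.g. "echo 0" for HugePages_Rsvd
        return 0
    done < <(printf '%s\n' "${mem[@]}")
}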
00:03:04.721 17:07:13 -- setup/hugepages.sh@100 -- # resv=0 00:03:04.721 17:07:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:04.721 nr_hugepages=1025 00:03:04.721 17:07:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:04.721 resv_hugepages=0 00:03:04.721 17:07:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:04.721 surplus_hugepages=0 00:03:04.721 17:07:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:04.721 anon_hugepages=0 00:03:04.721 17:07:13 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:04.721 17:07:13 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:04.721 17:07:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:04.722 17:07:13 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:04.722 17:07:13 -- setup/common.sh@18 -- # local node= 00:03:04.722 17:07:13 -- setup/common.sh@19 -- # local var val 00:03:04.722 17:07:13 -- setup/common.sh@20 -- # local mem_f mem 00:03:04.722 17:07:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.722 17:07:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.722 17:07:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.722 17:07:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.722 17:07:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.722 17:07:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 166888900 kB' 'MemAvailable: 170958344 kB' 'Buffers: 4124 kB' 'Cached: 17993572 kB' 'SwapCached: 0 kB' 'Active: 14979904 kB' 'Inactive: 3718324 kB' 'Active(anon): 13737148 kB' 'Inactive(anon): 0 kB' 'Active(file): 1242756 kB' 'Inactive(file): 3718324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 703704 kB' 'Mapped: 203564 kB' 'Shmem: 13036616 kB' 'KReclaimable: 505628 kB' 'Slab: 1156312 kB' 'SReclaimable: 505628 kB' 'SUnreclaim: 650684 kB' 'KernelStack: 20672 kB' 'PageTables: 9884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 15265488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316740 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3767252 kB' 'DirectMap2M: 42049536 kB' 'DirectMap1G: 156237824 kB' 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.722 17:07:13 -- setup/common.sh@32 
-- # continue 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.722 17:07:13 
-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.722 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.722 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.723 17:07:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.723 17:07:13 -- setup/common.sh@33 -- # echo 1025 00:03:04.723 17:07:13 -- setup/common.sh@33 -- # return 0 00:03:04.723 17:07:13 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:04.723 17:07:13 -- setup/hugepages.sh@112 -- # get_nodes 00:03:04.723 17:07:13 -- setup/hugepages.sh@27 -- # local node 00:03:04.723 17:07:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:04.723 17:07:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:04.723 17:07:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:04.723 17:07:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:04.723 17:07:13 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:04.723 17:07:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:04.723 17:07:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:04.723 17:07:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:04.723 17:07:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:04.723 17:07:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:04.723 17:07:13 -- setup/common.sh@18 -- # local node=0 00:03:04.723 
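By this point setup/hugepages.sh has surp=0 and resv=0 and re-reads HugePages_Total (1025) to confirm the kernel still holds exactly what odd_alloc configured; the snapshot is self-consistent, since Hugetlb: 2099200 kB is 1025 pages of Hugepagesize: 2048 kB. get_nodes then records the per-node counts seen through sysfs: 512 pages on node0 and 513 on node1. A hedged sketch of that bookkeeping (values are the ones this run produced; how get_nodes derives each count is not visible in the xtrace, so the two assignments are written out literally):

# Accounting step of setup/hugepages.sh as traced above; numbers come from this
# run, not from the script itself.  nr_hugepages and anon were set earlier.
surp=$(get_meminfo HugePages_Surp)        # 0
resv=$(get_meminfo HugePages_Rsvd)        # 0
echo "nr_hugepages=$nr_hugepages"         # 1025 requested by odd_alloc
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"
# System-wide check: what the kernel reports must account for the request.
(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))
# get_nodes: per-node counts as seen through sysfs on this two-node box.
nodes_sys[0]=512
nodes_sys[1]=513
no_nodes=${#nodes_sys[@]}                 # 2
(( no_nodes > 0 ))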
17:07:13 -- setup/common.sh@19 -- # local var val 00:03:04.723 17:07:13 -- setup/common.sh@20 -- # local mem_f mem 00:03:04.723 17:07:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.723 17:07:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:04.723 17:07:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:04.723 17:07:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.723 17:07:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.723 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.723 17:07:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 90376724 kB' 'MemUsed: 7285960 kB' 'SwapCached: 0 kB' 'Active: 4710196 kB' 'Inactive: 326360 kB' 'Active(anon): 3996160 kB' 'Inactive(anon): 0 kB' 'Active(file): 714036 kB' 'Inactive(file): 326360 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4516228 kB' 'Mapped: 91768 kB' 'AnonPages: 523432 kB' 'Shmem: 3475832 kB' 'KernelStack: 12152 kB' 'PageTables: 5304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 214128 kB' 'Slab: 508716 kB' 'SReclaimable: 214128 kB' 'SUnreclaim: 294588 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- 
# continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.724 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.724 17:07:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.724 17:07:13 -- setup/common.sh@33 -- # echo 0 00:03:04.725 17:07:13 -- setup/common.sh@33 -- # return 0 00:03:04.725 17:07:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:04.725 17:07:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:04.725 17:07:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:04.725 17:07:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:04.725 17:07:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:04.725 17:07:13 -- setup/common.sh@18 -- # local node=1 00:03:04.725 17:07:13 -- setup/common.sh@19 -- # local var val 00:03:04.725 17:07:13 -- setup/common.sh@20 -- # local mem_f mem 00:03:04.725 17:07:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.725 17:07:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:04.725 17:07:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:04.725 17:07:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.725 17:07:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.725 17:07:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 76512524 kB' 'MemUsed: 17205944 kB' 'SwapCached: 0 kB' 'Active: 10270172 kB' 'Inactive: 3391964 kB' 'Active(anon): 9741452 kB' 'Inactive(anon): 0 kB' 'Active(file): 528720 kB' 'Inactive(file): 3391964 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 13481496 kB' 'Mapped: 111796 kB' 'AnonPages: 180708 kB' 'Shmem: 9560812 kB' 'KernelStack: 8536 kB' 'PageTables: 4636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 291500 kB' 'Slab: 647596 kB' 'SReclaimable: 291500 kB' 'SUnreclaim: 356096 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # [[ MemFree == 
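Node 0 has just been processed above, and the same pass now repeats for node 1 (its per-node meminfo shows HugePages_Total: 513). The loop at hugepages.sh@115-117 pads each per-node target with reserved pages and with the node's own surplus read from /sys/devices/system/node/nodeN/meminfo; both adjustments are 0 in this run, so the targets are left unchanged. Roughly (folding the two commands traced for line @117 into one arithmetic statement is an assumption about how the script is written):

# Per-node adjustment loop as traced above.
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                                  # +0 here
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") )) # +0 here
done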
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.725 17:07:13 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # 
continue 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.725 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.725 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.726 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.726 17:07:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.726 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.726 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.726 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.726 17:07:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.726 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.726 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.726 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.726 17:07:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.726 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.726 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.726 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.726 17:07:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.726 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.726 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.726 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.726 17:07:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.726 17:07:13 -- setup/common.sh@32 -- # continue 00:03:04.726 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.726 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.726 17:07:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.726 17:07:13 -- setup/common.sh@33 -- # echo 0 00:03:04.726 17:07:13 -- setup/common.sh@33 -- # return 0 00:03:04.726 17:07:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:04.726 17:07:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:04.726 17:07:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:04.726 17:07:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:04.726 17:07:13 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:04.726 node0=512 expecting 513 00:03:04.726 17:07:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:04.726 17:07:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:04.726 17:07:13 -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:04.726 17:07:13 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:04.726 node1=513 expecting 512 00:03:04.726 17:07:13 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:04.726 00:03:04.726 real 0m2.919s 00:03:04.726 user 0m1.191s 00:03:04.726 sys 0m1.789s 00:03:04.726 17:07:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:04.726 17:07:13 -- common/autotest_common.sh@10 -- # set +x 00:03:04.726 ************************************ 00:03:04.726 END TEST odd_alloc 00:03:04.726 ************************************ 00:03:04.726 17:07:13 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:04.726 17:07:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:04.726 17:07:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:04.726 17:07:13 -- common/autotest_common.sh@10 -- # set +x 00:03:04.726 ************************************ 00:03:04.726 START TEST custom_alloc 00:03:04.726 ************************************ 00:03:04.726 17:07:13 -- common/autotest_common.sh@1111 -- # custom_alloc 00:03:04.726 17:07:13 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:04.726 17:07:13 -- setup/hugepages.sh@169 -- # local node 00:03:04.726 17:07:13 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:04.726 17:07:13 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:04.726 17:07:13 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:04.726 17:07:13 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:04.726 17:07:13 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:04.726 17:07:13 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:04.726 17:07:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:04.726 17:07:13 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:04.726 17:07:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:04.726 17:07:13 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:04.726 17:07:13 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:04.726 17:07:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:04.726 17:07:13 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:04.726 17:07:13 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:04.726 17:07:13 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:04.726 17:07:13 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:04.726 17:07:13 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:04.726 17:07:13 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:04.726 17:07:13 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:04.726 17:07:13 -- setup/hugepages.sh@83 -- # : 256 00:03:04.726 17:07:13 -- setup/hugepages.sh@84 -- # : 1 00:03:04.726 17:07:13 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:04.726 17:07:13 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:04.726 17:07:13 -- setup/hugepages.sh@83 -- # : 0 00:03:04.726 17:07:13 -- setup/hugepages.sh@84 -- # : 0 00:03:04.726 17:07:13 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:04.726 17:07:13 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:04.726 17:07:13 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:04.726 17:07:13 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:04.726 17:07:13 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:04.726 17:07:13 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:04.726 17:07:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:04.726 
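The custom_alloc trace above converts a requested test size into a hugepage count and then spreads it over the NUMA nodes: 1048576 becomes 512 pages of the default 2048 kB size, split 256 and 256 across two nodes, and the second pass with 2097152 yields 1024 pages. A minimal sketch of that arithmetic follows, assuming the 2048 kB default hugepage size and the two nodes seen in this run; the helper and variable names below are illustrative, not the setup/hugepages.sh source.

default_hugepages=2048   # kB, matches 'Hugepagesize: 2048 kB' in the meminfo snapshots below
_no_nodes=2              # assumption: two NUMA nodes, as in this run

get_test_nr_hugepages() {
    # size and default_hugepages are treated in the same units, mirroring the
    # 1048576 -> 512 and 2097152 -> 1024 conversions visible in the trace
    local size=$1
    (( size >= default_hugepages )) || return 1
    echo $(( size / default_hugepages ))
}

nr_hugepages=$(get_test_nr_hugepages 1048576)         # 512
declare -a nodes_test
for (( node = _no_nodes - 1; node >= 0; node-- )); do
    nodes_test[node]=$(( nr_hugepages / _no_nodes ))  # 256 per node
done
echo "nr_hugepages=$nr_hugepages per-node: ${nodes_test[*]}"

In the trace the resulting per-node counts are then handed to setup.sh through HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'.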
17:07:13 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:04.726 17:07:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:04.726 17:07:13 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:04.726 17:07:13 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:04.726 17:07:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:04.726 17:07:13 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:04.726 17:07:13 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:04.726 17:07:13 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:04.726 17:07:13 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:04.726 17:07:13 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:04.726 17:07:13 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:04.726 17:07:13 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:04.726 17:07:13 -- setup/hugepages.sh@78 -- # return 0 00:03:04.726 17:07:13 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:04.726 17:07:13 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:04.726 17:07:13 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:04.726 17:07:13 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:04.726 17:07:13 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:04.726 17:07:13 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:04.726 17:07:13 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:04.726 17:07:13 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:04.726 17:07:13 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:04.726 17:07:13 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:04.726 17:07:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:04.726 17:07:13 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:04.726 17:07:13 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:04.726 17:07:13 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:04.726 17:07:13 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:04.726 17:07:13 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:04.726 17:07:13 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:04.726 17:07:13 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:04.726 17:07:13 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:04.726 17:07:13 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:04.726 17:07:13 -- setup/hugepages.sh@78 -- # return 0 00:03:04.726 17:07:13 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:04.726 17:07:13 -- setup/hugepages.sh@187 -- # setup output 00:03:04.726 17:07:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:04.726 17:07:13 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:07.302 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:07.302 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:07.302 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:07.302 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:07.302 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:07.302 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:07.302 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:07.302 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:07.302 0000:00:04.0 (8086 2021): Already using the vfio-pci 
driver 00:03:07.302 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:07.302 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:07.302 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:07.302 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:07.302 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:07.302 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:07.302 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:07.302 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:07.302 17:07:16 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:07.302 17:07:16 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:07.302 17:07:16 -- setup/hugepages.sh@89 -- # local node 00:03:07.302 17:07:16 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:07.302 17:07:16 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:07.302 17:07:16 -- setup/hugepages.sh@92 -- # local surp 00:03:07.302 17:07:16 -- setup/hugepages.sh@93 -- # local resv 00:03:07.302 17:07:16 -- setup/hugepages.sh@94 -- # local anon 00:03:07.302 17:07:16 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:07.302 17:07:16 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:07.302 17:07:16 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:07.302 17:07:16 -- setup/common.sh@18 -- # local node= 00:03:07.302 17:07:16 -- setup/common.sh@19 -- # local var val 00:03:07.302 17:07:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:07.302 17:07:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.302 17:07:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.302 17:07:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.302 17:07:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.302 17:07:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.302 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.302 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.303 17:07:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 165844464 kB' 'MemAvailable: 169913908 kB' 'Buffers: 4124 kB' 'Cached: 17993672 kB' 'SwapCached: 0 kB' 'Active: 14982896 kB' 'Inactive: 3718324 kB' 'Active(anon): 13740140 kB' 'Inactive(anon): 0 kB' 'Active(file): 1242756 kB' 'Inactive(file): 3718324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 706632 kB' 'Mapped: 204160 kB' 'Shmem: 13036716 kB' 'KReclaimable: 505628 kB' 'Slab: 1156856 kB' 'SReclaimable: 505628 kB' 'SUnreclaim: 651228 kB' 'KernelStack: 20640 kB' 'PageTables: 9756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 15267840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316676 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3767252 kB' 'DirectMap2M: 42049536 kB' 'DirectMap1G: 156237824 kB' 00:03:07.303 17:07:16 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.303 17:07:16 -- setup/common.sh@32 -- # continue 
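The get_meminfo call traced here reads /proc/meminfo (or a per-node meminfo file when a node argument is given), strips any 'Node <n>' prefix with an extglob pattern, and then walks the fields with IFS=': ' read until it reaches the one it was asked for. Below is a simplified, self-contained sketch of that lookup; the function name get_meminfo_value is illustrative, not the setup/common.sh implementation.

#!/usr/bin/env bash
shopt -s extglob    # needed for the +([0-9]) pattern that drops the "Node <n> " prefix

get_meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node <n> "
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_meminfo_value HugePages_Total      # 1536 in the snapshot above
get_meminfo_value AnonHugePages 0      # node-local value, if node0 is present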
00:03:07.303 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.303 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.303 17:07:16 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.303 17:07:16 -- setup/common.sh@32 -- # continue
[setup/common.sh@32: the same \A\n\o\n\H\u\g\e\P\a\g\e\s check and continue repeat for each remaining /proc/meminfo field, MemAvailable through Percpu]
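Earlier in this block the test [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] gates the AnonHugePages lookup on the transparent-hugepage setting, and the scan then resolves AnonHugePages to the 0 kB shown in the snapshot, so anon ends up 0. A small sketch of that decision follows, under the assumption that the tested string comes from /sys/kernel/mm/transparent_hugepage/enabled; the variable names are illustrative.

# Only count anonymous hugepages when transparent hugepages are not disabled,
# as the "[never]" test above suggests.
thp_enabled=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
anon=0
if [[ $thp_enabled != *"[never]"* ]]; then
    anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)   # same field walk as above, via awk
fi
echo "anon_hugepages=${anon:-0}"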
00:03:07.303 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.303 17:07:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.303 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.303 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.303 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.303 17:07:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.303 17:07:16 -- setup/common.sh@33 -- # echo 0 00:03:07.304 17:07:16 -- setup/common.sh@33 -- # return 0 00:03:07.304 17:07:16 -- setup/hugepages.sh@97 -- # anon=0 00:03:07.304 17:07:16 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:07.304 17:07:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:07.304 17:07:16 -- setup/common.sh@18 -- # local node= 00:03:07.304 17:07:16 -- setup/common.sh@19 -- # local var val 00:03:07.304 17:07:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:07.304 17:07:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.304 17:07:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.304 17:07:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.304 17:07:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.304 17:07:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.304 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.304 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.304 17:07:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 165839004 kB' 'MemAvailable: 169908448 kB' 'Buffers: 4124 kB' 'Cached: 17993676 kB' 'SwapCached: 0 kB' 'Active: 14986740 kB' 'Inactive: 3718324 kB' 'Active(anon): 13743984 kB' 'Inactive(anon): 0 kB' 'Active(file): 1242756 kB' 'Inactive(file): 3718324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 710532 kB' 'Mapped: 204364 kB' 'Shmem: 13036720 kB' 'KReclaimable: 505628 kB' 'Slab: 1156864 kB' 'SReclaimable: 505628 kB' 'SUnreclaim: 651236 kB' 'KernelStack: 20640 kB' 'PageTables: 9816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 15271696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316664 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3767252 kB' 'DirectMap2M: 42049536 kB' 'DirectMap1G: 156237824 kB' 00:03:07.304 17:07:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.304 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.304 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.304 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.304 17:07:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.304 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.304 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.304 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.304 17:07:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.304 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.304 17:07:16 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:07.304 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.304 17:07:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.304 17:07:16 -- setup/common.sh@32 -- # continue
[setup/common.sh@32: the same \H\u\g\e\P\a\g\e\s\_\S\u\r\p check and continue repeat for each field from Cached through AnonHugePages]
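Each of these get_meminfo passes supplies one term of the bookkeeping that follows: HugePages_Surp becomes surp, HugePages_Rsvd becomes resv, HugePages_Total is read last, and the script then compares the counts against the 1536 pages requested for this run, as the (( 1536 == nr_hugepages + surp + resv )) and (( 1536 == nr_hugepages )) tests further down show. A compact, self-contained sketch of that check, using plain awk lookups in place of the traced helper; the expected value of 1536 is specific to this run.

expected=1536                                                     # nodes_hp[0]=512 + nodes_hp[1]=1024
surp=$(awk '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)   # 0 in the snapshots
resv=$(awk '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)   # 0 in the snapshots
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)  # 1536 in the snapshots

if (( total == expected + surp + resv )) && (( total == expected )); then
    echo "hugepage accounting matches: total=$total surp=$surp resv=$resv"
else
    echo "hugepage accounting mismatch: total=$total expected=$expected surp=$surp resv=$resv" >&2
fi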
00:03:07.305 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.305 17:07:16 -- setup/common.sh@33 -- # echo 0 00:03:07.305 17:07:16 -- setup/common.sh@33 -- # return 0 00:03:07.305 17:07:16 -- setup/hugepages.sh@99 -- # surp=0 00:03:07.305 17:07:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:07.305 17:07:16 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:07.305 17:07:16 -- setup/common.sh@18 -- # local node= 00:03:07.305 17:07:16 -- setup/common.sh@19 -- # local var val 00:03:07.305 17:07:16 -- setup/common.sh@20 
-- # local mem_f mem 00:03:07.305 17:07:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.305 17:07:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.305 17:07:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.305 17:07:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.305 17:07:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.305 17:07:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 165839184 kB' 'MemAvailable: 169908628 kB' 'Buffers: 4124 kB' 'Cached: 17993688 kB' 'SwapCached: 0 kB' 'Active: 14981384 kB' 'Inactive: 3718324 kB' 'Active(anon): 13738628 kB' 'Inactive(anon): 0 kB' 'Active(file): 1242756 kB' 'Inactive(file): 3718324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 705100 kB' 'Mapped: 203860 kB' 'Shmem: 13036732 kB' 'KReclaimable: 505628 kB' 'Slab: 1156864 kB' 'SReclaimable: 505628 kB' 'SUnreclaim: 651236 kB' 'KernelStack: 20656 kB' 'PageTables: 9860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 15277564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316692 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3767252 kB' 'DirectMap2M: 42049536 kB' 'DirectMap1G: 156237824 kB' 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.305 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.305 
17:07:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.305 17:07:16 -- setup/common.sh@32 -- # continue
[setup/common.sh@32: the same \H\u\g\e\P\a\g\e\s\_\R\s\v\d check and continue repeat for each field from Inactive through FileHugePages]
00:03:07.306 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.306 17:07:16 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:07.306 17:07:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.306 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.306 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.306 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.306 17:07:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.306 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.306 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.306 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.306 17:07:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.306 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.306 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.306 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.306 17:07:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.306 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.306 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.306 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.306 17:07:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.306 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.306 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.306 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.306 17:07:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.306 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.306 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.306 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.306 17:07:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.306 17:07:16 -- setup/common.sh@33 -- # echo 0 00:03:07.306 17:07:16 -- setup/common.sh@33 -- # return 0 00:03:07.306 17:07:16 -- setup/hugepages.sh@100 -- # resv=0 00:03:07.306 17:07:16 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:07.306 nr_hugepages=1536 00:03:07.306 17:07:16 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:07.306 resv_hugepages=0 00:03:07.306 17:07:16 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:07.306 surplus_hugepages=0 00:03:07.306 17:07:16 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:07.306 anon_hugepages=0 00:03:07.306 17:07:16 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:07.306 17:07:16 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:07.306 17:07:16 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:07.306 17:07:16 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:07.306 17:07:16 -- setup/common.sh@18 -- # local node= 00:03:07.306 17:07:16 -- setup/common.sh@19 -- # local var val 00:03:07.306 17:07:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:07.306 17:07:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.306 17:07:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.306 17:07:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.306 17:07:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.306 17:07:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.306 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.306 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.307 17:07:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 165839436 kB' 
'MemAvailable: 169908880 kB' 'Buffers: 4124 kB' 'Cached: 17993708 kB' 'SwapCached: 0 kB' 'Active: 14981564 kB' 'Inactive: 3718324 kB' 'Active(anon): 13738808 kB' 'Inactive(anon): 0 kB' 'Active(file): 1242756 kB' 'Inactive(file): 3718324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 705304 kB' 'Mapped: 203596 kB' 'Shmem: 13036752 kB' 'KReclaimable: 505628 kB' 'Slab: 1156864 kB' 'SReclaimable: 505628 kB' 'SUnreclaim: 651236 kB' 'KernelStack: 20688 kB' 'PageTables: 9976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 15266108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316676 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3767252 kB' 'DirectMap2M: 42049536 kB' 'DirectMap1G: 156237824 kB' 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.307 
17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.307 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.307 17:07:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
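The per-field scan traced above is the lookup pattern setup/common.sh uses for get_meminfo: each meminfo line is read with IFS=': ', the field name is compared against the requested key (here HugePages_Total), non-matching fields fall through to continue, and the matching field's value is echoed back as the return value. A minimal standalone sketch of that pattern, assuming a simplified helper written for illustration (get_meminfo_sketch is a hypothetical name, not the script's own function, and it omits the script's mapfile-based bookkeeping):

#!/usr/bin/env bash
# Sketch only: walks a meminfo file and prints the value of one requested key.
get_meminfo_sketch() {
    local get=$1 node=${2:-}          # e.g. HugePages_Total, optional NUMA node
    local mem_f=/proc/meminfo
    # per-node queries read the node-local meminfo when it exists
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        # node-local files prefix every field with "Node <n> "; drop that prefix
        if [[ $line == Node\ * ]]; then
            line=${line#Node }
            line=${line#* }
        fi
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

# usage (values depend on the host running the test):
#   get_meminfo_sketch HugePages_Total      -> e.g. 1536
#   get_meminfo_sketch HugePages_Surp 0     -> e.g. 0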
00:03:07.308 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.308 
17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.308 17:07:16 -- setup/common.sh@33 -- # echo 1536 00:03:07.308 17:07:16 -- setup/common.sh@33 -- # return 0 00:03:07.308 17:07:16 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:07.308 17:07:16 -- setup/hugepages.sh@112 -- # get_nodes 00:03:07.308 17:07:16 -- setup/hugepages.sh@27 -- # local node 00:03:07.308 17:07:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:07.308 17:07:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:07.308 17:07:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:07.308 17:07:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:07.308 17:07:16 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:07.308 17:07:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:07.308 17:07:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:07.308 17:07:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:07.308 17:07:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:07.308 17:07:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:07.308 17:07:16 -- setup/common.sh@18 -- # local node=0 00:03:07.308 17:07:16 -- setup/common.sh@19 -- # local var val 00:03:07.308 17:07:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:07.308 17:07:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.308 17:07:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:07.308 17:07:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:07.308 17:07:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.308 17:07:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.308 17:07:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 90395752 kB' 'MemUsed: 7266932 kB' 'SwapCached: 0 kB' 'Active: 4709716 kB' 'Inactive: 326360 kB' 'Active(anon): 3995680 kB' 'Inactive(anon): 0 kB' 'Active(file): 714036 kB' 'Inactive(file): 326360 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4516268 kB' 'Mapped: 91776 kB' 'AnonPages: 522920 kB' 'Shmem: 3475872 kB' 'KernelStack: 12136 kB' 'PageTables: 5300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 214128 kB' 'Slab: 509324 kB' 'SReclaimable: 214128 kB' 'SUnreclaim: 295196 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.308 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.308 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.309 17:07:16 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # 
continue 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.309 17:07:16 -- setup/common.sh@33 -- # echo 0 00:03:07.309 17:07:16 -- setup/common.sh@33 -- # return 0 00:03:07.309 17:07:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:07.309 17:07:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:07.309 17:07:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:07.309 17:07:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:07.309 17:07:16 -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:07.309 17:07:16 -- setup/common.sh@18 -- # local node=1 00:03:07.309 17:07:16 -- setup/common.sh@19 -- # local var val 00:03:07.309 17:07:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:07.309 17:07:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.309 17:07:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:07.309 17:07:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:07.309 17:07:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.309 17:07:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.309 17:07:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 75444124 kB' 'MemUsed: 18274344 kB' 'SwapCached: 0 kB' 'Active: 10271728 kB' 'Inactive: 3391964 kB' 'Active(anon): 9743008 kB' 'Inactive(anon): 0 kB' 'Active(file): 528720 kB' 'Inactive(file): 3391964 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 13481596 kB' 'Mapped: 111820 kB' 'AnonPages: 182212 kB' 'Shmem: 9560912 kB' 'KernelStack: 8536 kB' 'PageTables: 4620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 291500 kB' 'Slab: 647540 kB' 'SReclaimable: 291500 kB' 'SUnreclaim: 356040 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.309 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.309 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 
-- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- 
# continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # continue 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.310 17:07:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.310 17:07:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.310 17:07:16 -- setup/common.sh@33 -- # echo 0 00:03:07.310 17:07:16 -- setup/common.sh@33 -- # return 0 00:03:07.310 17:07:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:07.310 17:07:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:07.310 17:07:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:07.310 17:07:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:07.310 17:07:16 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:07.310 node0=512 expecting 512 00:03:07.310 17:07:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:07.310 17:07:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:07.310 17:07:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:07.310 17:07:16 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:07.310 node1=1024 expecting 1024 00:03:07.310 17:07:16 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:07.310 00:03:07.310 real 0m2.682s 00:03:07.310 user 0m1.046s 00:03:07.310 sys 0m1.666s 00:03:07.310 17:07:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:07.310 17:07:16 -- common/autotest_common.sh@10 -- # set +x 00:03:07.310 ************************************ 00:03:07.310 END TEST custom_alloc 00:03:07.310 ************************************ 00:03:07.310 17:07:16 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:07.310 17:07:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:07.310 17:07:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:07.310 17:07:16 -- common/autotest_common.sh@10 -- # set +x 00:03:07.570 ************************************ 00:03:07.570 START TEST no_shrink_alloc 00:03:07.570 ************************************ 00:03:07.570 17:07:16 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:03:07.570 17:07:16 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:07.570 17:07:16 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:07.570 17:07:16 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:07.570 17:07:16 -- setup/hugepages.sh@51 -- # shift 00:03:07.570 17:07:16 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:07.570 17:07:16 -- setup/hugepages.sh@52 -- # local node_ids 
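The no_shrink_alloc test starting here calls get_test_nr_hugepages with a size of 2097152 kB and an explicit node list of just node 0; with the 2048 kB Hugepagesize reported in the meminfo dumps, that works out to the nr_hugepages=1024 seen in the trace, all of it expected on the single listed node. A rough sketch of that arithmetic, assuming a hypothetical helper (nr_pages_for_size is illustrative and the kB interpretation of the size argument is inferred from the 2097152 / 2048 = 1024 result visible in the log, not from the hugepages.sh source):

#!/usr/bin/env bash
# Sketch only: convert a requested hugepage total (kB) into a page count and,
# when node ids are given, report the full count as expected on each node.
nr_pages_for_size() {
    local size_kb=$1; shift            # requested total, in kB
    local hugepagesize_kb
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    local nr=$(( size_kb / hugepagesize_kb ))
    if (( $# > 0 )); then
        local node
        for node in "$@"; do
            echo "node${node}=${nr}"
        done
    else
        echo "nr_hugepages=${nr}"
    fi
}

# e.g. nr_pages_for_size 2097152 0   ->  node0=1024   (with 2048 kB hugepages)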
00:03:07.570 17:07:16 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:07.570 17:07:16 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:07.570 17:07:16 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:07.570 17:07:16 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:07.570 17:07:16 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:07.570 17:07:16 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:07.570 17:07:16 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:07.570 17:07:16 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:07.570 17:07:16 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:07.570 17:07:16 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:07.570 17:07:16 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:07.570 17:07:16 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:07.570 17:07:16 -- setup/hugepages.sh@73 -- # return 0 00:03:07.570 17:07:16 -- setup/hugepages.sh@198 -- # setup output 00:03:07.570 17:07:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:07.570 17:07:16 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:10.109 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:10.109 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:10.109 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:10.109 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:10.109 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:10.109 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:10.109 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:10.109 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:10.109 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:10.109 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:10.109 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:10.109 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:10.109 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:10.109 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:10.109 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:10.109 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:10.109 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:10.109 17:07:19 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:10.109 17:07:19 -- setup/hugepages.sh@89 -- # local node 00:03:10.109 17:07:19 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:10.109 17:07:19 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:10.109 17:07:19 -- setup/hugepages.sh@92 -- # local surp 00:03:10.109 17:07:19 -- setup/hugepages.sh@93 -- # local resv 00:03:10.109 17:07:19 -- setup/hugepages.sh@94 -- # local anon 00:03:10.109 17:07:19 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:10.109 17:07:19 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:10.109 17:07:19 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:10.110 17:07:19 -- setup/common.sh@18 -- # local node= 00:03:10.110 17:07:19 -- setup/common.sh@19 -- # local var val 00:03:10.110 17:07:19 -- setup/common.sh@20 -- # local mem_f mem 00:03:10.110 17:07:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.110 17:07:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.110 17:07:19 -- setup/common.sh@25 
-- # [[ -n '' ]] 00:03:10.110 17:07:19 -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.110 17:07:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.110 17:07:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 166908684 kB' 'MemAvailable: 170978128 kB' 'Buffers: 4124 kB' 'Cached: 17993796 kB' 'SwapCached: 0 kB' 'Active: 14982660 kB' 'Inactive: 3718324 kB' 'Active(anon): 13739904 kB' 'Inactive(anon): 0 kB' 'Active(file): 1242756 kB' 'Inactive(file): 3718324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 705876 kB' 'Mapped: 203744 kB' 'Shmem: 13036840 kB' 'KReclaimable: 505628 kB' 'Slab: 1156068 kB' 'SReclaimable: 505628 kB' 'SUnreclaim: 650440 kB' 'KernelStack: 20704 kB' 'PageTables: 9992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 15266416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316724 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3767252 kB' 'DirectMap2M: 42049536 kB' 'DirectMap1G: 156237824 kB' 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.110 17:07:19 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.110 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.110 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.111 
17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.111 17:07:19 -- setup/common.sh@33 -- # echo 0 00:03:10.111 17:07:19 -- setup/common.sh@33 -- # return 0 00:03:10.111 17:07:19 -- setup/hugepages.sh@97 -- # anon=0 00:03:10.111 17:07:19 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:10.111 17:07:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:10.111 17:07:19 -- setup/common.sh@18 -- # local node= 00:03:10.111 17:07:19 -- setup/common.sh@19 -- # local var val 00:03:10.111 17:07:19 -- setup/common.sh@20 -- # local mem_f mem 00:03:10.111 17:07:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.111 17:07:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.111 17:07:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.111 17:07:19 -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.111 17:07:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.111 17:07:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 166908936 kB' 'MemAvailable: 170978380 kB' 'Buffers: 4124 kB' 'Cached: 17993800 kB' 'SwapCached: 0 kB' 'Active: 14981940 kB' 'Inactive: 3718324 kB' 'Active(anon): 13739184 kB' 'Inactive(anon): 0 kB' 
'Active(file): 1242756 kB' 'Inactive(file): 3718324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 705628 kB' 'Mapped: 203620 kB' 'Shmem: 13036844 kB' 'KReclaimable: 505628 kB' 'Slab: 1156052 kB' 'SReclaimable: 505628 kB' 'SUnreclaim: 650424 kB' 'KernelStack: 20704 kB' 'PageTables: 9992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 15266428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316724 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3767252 kB' 'DirectMap2M: 42049536 kB' 'DirectMap1G: 156237824 kB' 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.111 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.111 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 
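The long run of [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue statements above (and the matching runs for HugePages_Rsvd, HugePages_Total and AnonHugePages further on) is bash xtrace of setup/common.sh's get_meminfo scanning the captured /proc/meminfo output one field at a time until the requested key matches, then echoing its value. A minimal stand-alone sketch of that lookup, with an illustrative function name and prefix handling rather than the exact upstream code:

    get_meminfo_sketch() {
        # Read the chosen meminfo file with IFS=': ' and print the value of one field,
        # mirroring the var/val loop in the trace.
        local get=$1 node=$2 mem_f=/proc/meminfo line var val
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] && \
            mem_f=/sys/devices/system/node/node$node/meminfo
        while read -r line; do
            line=${line#"Node $node "}        # per-node files prefix every row with "Node <n> "
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }
    # get_meminfo_sketch HugePages_Surp     -> 0 on this host, which is where surp=0 comes from
    # get_meminfo_sketch HugePages_Surp 0   -> the node0 value read later in the trace
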
17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.112 17:07:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.112 17:07:19 -- setup/common.sh@33 -- # echo 0 00:03:10.112 17:07:19 -- setup/common.sh@33 -- # return 0 00:03:10.112 17:07:19 -- setup/hugepages.sh@99 -- # surp=0 00:03:10.112 17:07:19 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:10.112 17:07:19 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:10.112 17:07:19 -- setup/common.sh@18 -- # local node= 00:03:10.112 17:07:19 -- setup/common.sh@19 -- # local var val 00:03:10.112 17:07:19 -- setup/common.sh@20 -- # local mem_f mem 00:03:10.112 17:07:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.112 17:07:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.112 17:07:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.112 17:07:19 -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.112 17:07:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.112 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 166908180 kB' 'MemAvailable: 170977624 kB' 'Buffers: 4124 kB' 'Cached: 17993812 kB' 'SwapCached: 0 kB' 'Active: 14981948 kB' 'Inactive: 3718324 kB' 'Active(anon): 13739192 kB' 'Inactive(anon): 0 kB' 'Active(file): 1242756 kB' 'Inactive(file): 3718324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 705620 kB' 'Mapped: 203620 kB' 'Shmem: 13036856 kB' 'KReclaimable: 505628 kB' 'Slab: 1156052 kB' 'SReclaimable: 505628 kB' 'SUnreclaim: 650424 kB' 'KernelStack: 20704 kB' 'PageTables: 9992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 15266440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316724 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 3767252 kB' 'DirectMap2M: 42049536 kB' 'DirectMap1G: 156237824 kB' 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 
00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.113 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.113 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.114 
17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 17:07:19 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.114 17:07:19 -- setup/common.sh@33 -- # echo 0 00:03:10.114 17:07:19 -- setup/common.sh@33 -- # return 0 00:03:10.114 17:07:19 -- setup/hugepages.sh@100 -- # resv=0 00:03:10.114 17:07:19 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:10.114 nr_hugepages=1024 00:03:10.114 17:07:19 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:10.114 resv_hugepages=0 00:03:10.114 17:07:19 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:10.114 surplus_hugepages=0 00:03:10.114 17:07:19 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:10.114 anon_hugepages=0 00:03:10.114 17:07:19 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:10.114 17:07:19 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:10.114 17:07:19 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:10.114 17:07:19 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:10.114 17:07:19 -- setup/common.sh@18 -- # local node= 00:03:10.114 17:07:19 -- setup/common.sh@19 -- # local var val 00:03:10.114 17:07:19 -- setup/common.sh@20 -- # local mem_f mem 00:03:10.114 17:07:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.114 17:07:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.114 17:07:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.114 17:07:19 -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.114 17:07:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 17:07:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 166907928 kB' 'MemAvailable: 170977372 kB' 'Buffers: 4124 kB' 'Cached: 17993824 kB' 'SwapCached: 0 kB' 'Active: 14982548 kB' 'Inactive: 3718324 kB' 'Active(anon): 13739792 kB' 'Inactive(anon): 0 kB' 'Active(file): 1242756 kB' 'Inactive(file): 3718324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 706172 kB' 'Mapped: 203620 kB' 'Shmem: 13036868 kB' 'KReclaimable: 505628 kB' 'Slab: 1156052 kB' 'SReclaimable: 505628 kB' 'SUnreclaim: 650424 kB' 'KernelStack: 20752 kB' 'PageTables: 10124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 15266456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316708 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3767252 kB' 'DirectMap2M: 42049536 kB' 'DirectMap1G: 156237824 kB' 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 17:07:19 -- setup/common.sh@31 
-- # read -r var val _ 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.114 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.114 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- 
setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 
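The scan in progress here is the third lookup (HugePages_Total); together with the surp and resv values already returned it drives the accounting hugepages.sh echoed above: nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, checked by the (( 1024 == nr_hugepages + surp + resv )) tests. A self-contained sketch of the same arithmetic, pulling the counters with awk instead of get_meminfo (variable names are illustrative):

    nr_hugepages=1024                                             # the count the test expects
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)     # 0 in this run
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)     # 0 in this run
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1024 in this run
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent" \
        || echo "mismatch: total=$total surp=$surp resv=$resv"
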
00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.115 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.115 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.116 17:07:19 -- setup/common.sh@33 -- # echo 1024 00:03:10.116 17:07:19 -- setup/common.sh@33 -- # return 0 00:03:10.116 17:07:19 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:10.116 17:07:19 -- setup/hugepages.sh@112 -- # get_nodes 00:03:10.116 17:07:19 -- setup/hugepages.sh@27 -- # local node 00:03:10.116 17:07:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:10.116 17:07:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:10.116 17:07:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:10.116 17:07:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:10.116 17:07:19 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:10.116 17:07:19 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:10.116 17:07:19 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:10.116 17:07:19 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:10.116 17:07:19 -- 
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:10.116 17:07:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:10.116 17:07:19 -- setup/common.sh@18 -- # local node=0 00:03:10.116 17:07:19 -- setup/common.sh@19 -- # local var val 00:03:10.116 17:07:19 -- setup/common.sh@20 -- # local mem_f mem 00:03:10.116 17:07:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.116 17:07:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:10.116 17:07:19 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:10.116 17:07:19 -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.116 17:07:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 17:07:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 89347772 kB' 'MemUsed: 8314912 kB' 'SwapCached: 0 kB' 'Active: 4710160 kB' 'Inactive: 326360 kB' 'Active(anon): 3996124 kB' 'Inactive(anon): 0 kB' 'Active(file): 714036 kB' 'Inactive(file): 326360 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4516280 kB' 'Mapped: 91772 kB' 'AnonPages: 523460 kB' 'Shmem: 3475884 kB' 'KernelStack: 12184 kB' 'PageTables: 5472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 214128 kB' 'Slab: 508864 kB' 'SReclaimable: 214128 kB' 'SUnreclaim: 294736 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # continue 
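For the per-node pass, get_meminfo is invoked as get_meminfo HugePages_Surp 0, so mem_f switches to /sys/devices/system/node/node0/meminfo; get_nodes just before it found two NUMA nodes, with nodes_sys[0]=1024 and nodes_sys[1]=0. A short sketch of that per-node tally, assuming the sysfs layout the trace shows:

    # Print each node's HugePages_Total, as in the nodes_sys[] bookkeeping above.
    for d in /sys/devices/system/node/node[0-9]*; do
        n=${d##*node}
        printf 'node%s=%s\n' "$n" "$(awk '$3 == "HugePages_Total:" {print $4}' "$d/meminfo")"
    done
    # Expected here: node0=1024 and node1=0, matching the "node0=1024 expecting 1024" check below.
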
00:03:10.116 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.116 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.116 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 17:07:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.117 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 17:07:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.117 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 17:07:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.117 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 17:07:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.117 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 
17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 17:07:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.117 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 17:07:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.117 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 17:07:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.117 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 17:07:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 17:07:19 -- setup/common.sh@32 -- # continue 00:03:10.117 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.117 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.117 17:07:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.117 17:07:19 -- setup/common.sh@33 -- # echo 0 00:03:10.117 17:07:19 -- setup/common.sh@33 -- # return 0 00:03:10.117 17:07:19 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:10.117 17:07:19 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:10.117 17:07:19 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:10.117 17:07:19 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:10.117 17:07:19 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:10.117 node0=1024 expecting 1024 00:03:10.117 17:07:19 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:10.117 17:07:19 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:10.117 17:07:19 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:10.117 17:07:19 -- setup/hugepages.sh@202 -- # setup output 00:03:10.117 17:07:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:10.117 17:07:19 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:12.659 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:12.659 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:12.659 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:12.659 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:12.659 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:12.659 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:12.659 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:12.659 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:12.659 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:12.659 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:12.659 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:12.659 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:12.659 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:12.659 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:12.659 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:12.659 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:12.659 0000:80:04.0 (8086 2021): Already using the vfio-pci 
driver 00:03:12.659 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:12.921 17:07:21 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:12.922 17:07:21 -- setup/hugepages.sh@89 -- # local node 00:03:12.922 17:07:21 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:12.922 17:07:21 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:12.922 17:07:21 -- setup/hugepages.sh@92 -- # local surp 00:03:12.922 17:07:21 -- setup/hugepages.sh@93 -- # local resv 00:03:12.922 17:07:21 -- setup/hugepages.sh@94 -- # local anon 00:03:12.922 17:07:21 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:12.922 17:07:21 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:12.922 17:07:21 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:12.922 17:07:21 -- setup/common.sh@18 -- # local node= 00:03:12.922 17:07:21 -- setup/common.sh@19 -- # local var val 00:03:12.922 17:07:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:12.922 17:07:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.922 17:07:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.922 17:07:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.922 17:07:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.922 17:07:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.922 17:07:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 166875816 kB' 'MemAvailable: 170945260 kB' 'Buffers: 4124 kB' 'Cached: 17993896 kB' 'SwapCached: 0 kB' 'Active: 14982032 kB' 'Inactive: 3718324 kB' 'Active(anon): 13739276 kB' 'Inactive(anon): 0 kB' 'Active(file): 1242756 kB' 'Inactive(file): 3718324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 705544 kB' 'Mapped: 203676 kB' 'Shmem: 13036940 kB' 'KReclaimable: 505628 kB' 'Slab: 1156164 kB' 'SReclaimable: 505628 kB' 'SUnreclaim: 650536 kB' 'KernelStack: 20672 kB' 'PageTables: 9884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 15267200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316740 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3767252 kB' 'DirectMap2M: 42049536 kB' 'DirectMap1G: 156237824 kB' 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.922 
17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
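Annotation: the `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]` test at setup/hugepages.sh line 96, traced a little above, is by all appearances the expanded form of a gate on /sys/kernel/mm/transparent_hugepage/enabled ("always [madvise] never" is how the kernel reports that file, with the active mode in brackets): the AnonHugePages figure is only collected when THP is not pinned to "never". A condensed sketch of that gate, using the standard kernel paths and hypothetical variable names rather than the script's own helpers:

  # Sketch only: read the THP mode, then pick up the anonymous-hugepage usage
  # when THP is not disabled. Names here are illustrative, not SPDK's.
  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  anon=0
  if [[ $thp != *"[never]"* ]]; then
      anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # value in kB
  fi
  echo "anon hugepages in use: ${anon} kB"

In this run the reported figure is 0 kB, which matches the anon_hugepages=0 echoed later in the trace.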
00:03:12.922 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.922 17:07:21 -- setup/common.sh@32 -- 
# [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.922 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.922 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.923 17:07:21 -- setup/common.sh@33 -- # echo 0 00:03:12.923 17:07:21 -- setup/common.sh@33 -- # return 0 00:03:12.923 17:07:21 -- 
setup/hugepages.sh@97 -- # anon=0 00:03:12.923 17:07:21 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:12.923 17:07:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:12.923 17:07:21 -- setup/common.sh@18 -- # local node= 00:03:12.923 17:07:21 -- setup/common.sh@19 -- # local var val 00:03:12.923 17:07:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:12.923 17:07:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.923 17:07:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.923 17:07:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.923 17:07:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.923 17:07:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.923 17:07:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 166882224 kB' 'MemAvailable: 170951668 kB' 'Buffers: 4124 kB' 'Cached: 17993900 kB' 'SwapCached: 0 kB' 'Active: 14981976 kB' 'Inactive: 3718324 kB' 'Active(anon): 13739220 kB' 'Inactive(anon): 0 kB' 'Active(file): 1242756 kB' 'Inactive(file): 3718324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 705580 kB' 'Mapped: 203624 kB' 'Shmem: 13036944 kB' 'KReclaimable: 505628 kB' 'Slab: 1156216 kB' 'SReclaimable: 505628 kB' 'SUnreclaim: 650588 kB' 'KernelStack: 20704 kB' 'PageTables: 9976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 15267212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316708 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3767252 kB' 'DirectMap2M: 42049536 kB' 'DirectMap1G: 156237824 kB' 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 
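Annotation: every get_meminfo call traced in this section follows the pattern the xtrace shows for HugePages_Surp just above: pick /proc/meminfo (or a node's /sys/devices/system/node/nodeN/meminfo when a node argument is given), read it into an array, strip the "Node N " prefix that per-node files carry, then split each "key: value kB" line with IFS=': ' and read -r var val _ until the requested key matches, at which point the value is echoed back to the caller. A minimal self-contained sketch of that lookup, under a hypothetical helper name rather than the SPDK function itself:

  # get_meminfo_field KEY [NODE] -- illustrative stand-in for the lookup pattern
  # visible in the setup/common.sh trace; not the script's own implementation.
  get_meminfo_field() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local line var val _
      while IFS= read -r line; do
          # Per-node meminfo files prefix every line with "Node <id> "; drop it.
          if [[ $line == "Node "* ]]; then
              line=${line#Node }
              line=${line#* }
          fi
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "$val"            # value only; most fields are reported in kB
              return 0
          fi
      done < "$mem_f"
      return 1
  }

  get_meminfo_field HugePages_Surp     # 0 in this run
  get_meminfo_field HugePages_Total 0  # node-0 figure; 1024 in this run

The two sample calls mirror the values visible in this log: a system-wide surplus of 0 and 1024 pages accounted to node0.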
00:03:12.923 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.923 17:07:21 
-- setup/common.sh@32 -- # continue 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.923 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.923 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.924 17:07:21 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.924 17:07:21 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # continue 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.924 17:07:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.924 17:07:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.924 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.924 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.924 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.924 17:07:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.924 17:07:22 -- setup/common.sh@33 -- # echo 0 00:03:12.924 17:07:22 -- setup/common.sh@33 -- # return 0 00:03:12.924 17:07:22 -- setup/hugepages.sh@99 -- # surp=0 00:03:12.924 17:07:22 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:12.924 17:07:22 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:12.924 17:07:22 -- setup/common.sh@18 -- # local node= 00:03:12.924 17:07:22 -- setup/common.sh@19 -- # local var val 00:03:12.924 17:07:22 -- setup/common.sh@20 -- # local mem_f mem 00:03:12.924 17:07:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.924 17:07:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.924 17:07:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.924 17:07:22 -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.924 17:07:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.924 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.924 17:07:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 166882872 kB' 'MemAvailable: 170952316 kB' 
'Buffers: 4124 kB' 'Cached: 17993912 kB' 'SwapCached: 0 kB' 'Active: 14981992 kB' 'Inactive: 3718324 kB' 'Active(anon): 13739236 kB' 'Inactive(anon): 0 kB' 'Active(file): 1242756 kB' 'Inactive(file): 3718324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 705580 kB' 'Mapped: 203624 kB' 'Shmem: 13036956 kB' 'KReclaimable: 505628 kB' 'Slab: 1156216 kB' 'SReclaimable: 505628 kB' 'SUnreclaim: 650588 kB' 'KernelStack: 20704 kB' 'PageTables: 9976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 15267224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316708 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3767252 kB' 'DirectMap2M: 42049536 kB' 'DirectMap1G: 156237824 kB' 00:03:12.924 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.924 17:07:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.924 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ 
Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # 
continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.925 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.925 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.926 17:07:22 
-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.926 17:07:22 -- setup/common.sh@33 -- # echo 0 00:03:12.926 17:07:22 -- setup/common.sh@33 -- # return 0 00:03:12.926 17:07:22 -- setup/hugepages.sh@100 -- # resv=0 00:03:12.926 17:07:22 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:12.926 nr_hugepages=1024 00:03:12.926 17:07:22 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:12.926 resv_hugepages=0 00:03:12.926 17:07:22 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:12.926 surplus_hugepages=0 00:03:12.926 17:07:22 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:12.926 anon_hugepages=0 00:03:12.926 17:07:22 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:12.926 17:07:22 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:12.926 17:07:22 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:12.926 17:07:22 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:12.926 17:07:22 -- setup/common.sh@18 -- # local node= 00:03:12.926 17:07:22 -- setup/common.sh@19 -- # local var val 00:03:12.926 17:07:22 -- setup/common.sh@20 -- # local mem_f mem 00:03:12.926 17:07:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.926 17:07:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.926 17:07:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.926 17:07:22 -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.926 17:07:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.926 17:07:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 166882872 kB' 'MemAvailable: 170952316 kB' 'Buffers: 4124 kB' 'Cached: 17993928 kB' 'SwapCached: 0 kB' 'Active: 14981920 kB' 'Inactive: 3718324 kB' 'Active(anon): 13739164 kB' 'Inactive(anon): 0 kB' 'Active(file): 1242756 kB' 'Inactive(file): 3718324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 705456 kB' 'Mapped: 203624 kB' 'Shmem: 13036972 kB' 'KReclaimable: 505628 kB' 'Slab: 1156216 kB' 'SReclaimable: 505628 kB' 'SUnreclaim: 650588 kB' 'KernelStack: 20688 kB' 'PageTables: 9920 kB' 'SecPageTables: 0 
kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 15267240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316708 kB' 'VmallocChunk: 0 kB' 'Percpu: 94464 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3767252 kB' 'DirectMap2M: 42049536 kB' 'DirectMap1G: 156237824 kB' 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.926 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.926 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.927 
17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.927 
17:07:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # continue 
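Annotation: a short way back in the trace, verify_nr_hugepages echoed nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 and evaluated (( 1024 == nr_hugepages + surp + resv )); the loop being traced here is the follow-up get_meminfo HugePages_Total read at hugepages.sh line 110, which appears to feed the same comparison once it returns. Restated as a standalone sketch (variable names and layout are mine; the figures are the ones echoed in this log):

  # Recap of the bookkeeping from the verify step traced above. The literal 1024
  # on the left mirrors the value substituted into the traced expression.
  nr_hugepages=1024   # HugePages_Total from /proc/meminfo
  surp=0              # HugePages_Surp
  resv=0              # HugePages_Rsvd
  anon=0              # AnonHugePages, echoed for information but not part of the check
  echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
  if (( 1024 == nr_hugepages + surp + resv )); then
      echo "hugepage pool accounted for"      # the branch this run takes
  else
      echo "hugepage accounting mismatch" >&2
  fi

With surp and resv both 0, the check reduces to HugePages_Total matching the expected 1024, which is why the earlier "node0=1024 expecting 1024" line passed as well.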
00:03:12.927 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.927 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.927 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.928 
17:07:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.928 17:07:22 -- setup/common.sh@33 -- # echo 1024 00:03:12.928 17:07:22 -- setup/common.sh@33 -- # return 0 00:03:12.928 17:07:22 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:12.928 17:07:22 -- setup/hugepages.sh@112 -- # get_nodes 00:03:12.928 17:07:22 -- setup/hugepages.sh@27 -- # local node 00:03:12.928 17:07:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:12.928 17:07:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:12.928 17:07:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:12.928 17:07:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:12.928 17:07:22 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:12.928 17:07:22 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:12.928 17:07:22 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:12.928 17:07:22 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:12.928 17:07:22 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:12.928 17:07:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:12.928 17:07:22 -- setup/common.sh@18 -- # local node=0 00:03:12.928 17:07:22 -- setup/common.sh@19 -- # local var val 00:03:12.928 17:07:22 -- setup/common.sh@20 -- # local mem_f mem 00:03:12.928 17:07:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.928 17:07:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:12.928 17:07:22 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:12.928 17:07:22 -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.928 17:07:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.928 17:07:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 89354276 kB' 'MemUsed: 8308408 kB' 'SwapCached: 0 kB' 'Active: 4711496 kB' 'Inactive: 326360 kB' 'Active(anon): 3997460 kB' 'Inactive(anon): 0 kB' 'Active(file): 714036 kB' 'Inactive(file): 326360 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4516304 kB' 'Mapped: 91772 kB' 'AnonPages: 524760 kB' 'Shmem: 3475908 kB' 'KernelStack: 12184 kB' 'PageTables: 5404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 214128 kB' 'Slab: 508920 kB' 'SReclaimable: 214128 kB' 'SUnreclaim: 294792 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # 
continue 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.928 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.928 17:07:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.929 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.929 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.929 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.929 17:07:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.929 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.929 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.929 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.929 17:07:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.929 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.929 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.929 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.929 17:07:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.929 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.929 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.929 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.929 17:07:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.929 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.929 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.929 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.929 17:07:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.929 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.929 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.929 17:07:22 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:12.929 17:07:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.929 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.929 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.929 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.929 17:07:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.929 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.929 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.929 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.929 17:07:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.929 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.929 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.929 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.929 17:07:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.929 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.929 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.929 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.929 17:07:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.929 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.929 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.929 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.929 17:07:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.929 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.929 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.929 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.929 17:07:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.929 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.929 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.929 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.929 17:07:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.929 17:07:22 -- setup/common.sh@32 -- # continue 00:03:12.929 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.929 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.929 17:07:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.929 17:07:22 -- setup/common.sh@33 -- # echo 0 00:03:12.929 17:07:22 -- setup/common.sh@33 -- # return 0 00:03:12.929 17:07:22 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:12.929 17:07:22 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:12.929 17:07:22 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:12.929 17:07:22 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:12.929 17:07:22 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:12.929 node0=1024 expecting 1024 00:03:12.929 17:07:22 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:12.929 00:03:12.929 real 0m5.508s 00:03:12.929 user 0m2.142s 00:03:12.929 sys 0m3.442s 00:03:12.929 17:07:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:12.929 17:07:22 -- common/autotest_common.sh@10 -- # set +x 00:03:12.929 ************************************ 00:03:12.929 END TEST no_shrink_alloc 00:03:12.929 ************************************ 00:03:12.929 17:07:22 -- setup/hugepages.sh@217 -- # clear_hp 00:03:12.929 17:07:22 -- setup/hugepages.sh@37 -- # local 
node hp 00:03:12.929 17:07:22 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:12.929 17:07:22 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:12.929 17:07:22 -- setup/hugepages.sh@41 -- # echo 0 00:03:12.929 17:07:22 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:12.929 17:07:22 -- setup/hugepages.sh@41 -- # echo 0 00:03:12.929 17:07:22 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:12.929 17:07:22 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:12.929 17:07:22 -- setup/hugepages.sh@41 -- # echo 0 00:03:12.929 17:07:22 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:12.929 17:07:22 -- setup/hugepages.sh@41 -- # echo 0 00:03:12.929 17:07:22 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:12.929 17:07:22 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:12.929 00:03:12.929 real 0m21.925s 00:03:12.929 user 0m8.172s 00:03:12.929 sys 0m12.537s 00:03:12.929 17:07:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:12.929 17:07:22 -- common/autotest_common.sh@10 -- # set +x 00:03:12.929 ************************************ 00:03:12.929 END TEST hugepages 00:03:12.929 ************************************ 00:03:13.188 17:07:22 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:03:13.188 17:07:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:13.188 17:07:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:13.188 17:07:22 -- common/autotest_common.sh@10 -- # set +x 00:03:13.188 ************************************ 00:03:13.188 START TEST driver 00:03:13.188 ************************************ 00:03:13.188 17:07:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:03:13.188 * Looking for test storage... 
00:03:13.188 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:13.188 17:07:22 -- setup/driver.sh@68 -- # setup reset 00:03:13.188 17:07:22 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:13.188 17:07:22 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:17.375 17:07:26 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:17.375 17:07:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:17.375 17:07:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:17.375 17:07:26 -- common/autotest_common.sh@10 -- # set +x 00:03:17.375 ************************************ 00:03:17.375 START TEST guess_driver 00:03:17.375 ************************************ 00:03:17.375 17:07:26 -- common/autotest_common.sh@1111 -- # guess_driver 00:03:17.375 17:07:26 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:17.375 17:07:26 -- setup/driver.sh@47 -- # local fail=0 00:03:17.375 17:07:26 -- setup/driver.sh@49 -- # pick_driver 00:03:17.375 17:07:26 -- setup/driver.sh@36 -- # vfio 00:03:17.375 17:07:26 -- setup/driver.sh@21 -- # local iommu_grups 00:03:17.375 17:07:26 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:17.375 17:07:26 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:17.375 17:07:26 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:17.375 17:07:26 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:17.375 17:07:26 -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:03:17.375 17:07:26 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:17.375 17:07:26 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:17.375 17:07:26 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:17.375 17:07:26 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:17.375 17:07:26 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:17.375 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:17.375 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:17.375 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:17.375 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:17.375 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:17.375 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:17.375 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:17.375 17:07:26 -- setup/driver.sh@30 -- # return 0 00:03:17.375 17:07:26 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:17.375 17:07:26 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:17.375 17:07:26 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:17.375 17:07:26 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:17.375 Looking for driver=vfio-pci 00:03:17.375 17:07:26 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.375 17:07:26 -- setup/driver.sh@45 -- # setup output config 00:03:17.375 17:07:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:17.375 17:07:26 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:19.903 17:07:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.903 17:07:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 
00:03:19.903 17:07:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.903 17:07:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.903 17:07:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.903 17:07:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.903 17:07:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.903 17:07:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.903 17:07:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.903 17:07:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.903 17:07:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.903 17:07:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.903 17:07:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.903 17:07:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.903 17:07:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.903 17:07:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.903 17:07:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.903 17:07:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.903 17:07:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.903 17:07:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.903 17:07:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.903 17:07:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.903 17:07:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.903 17:07:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.903 17:07:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.903 17:07:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.903 17:07:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.903 17:07:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.903 17:07:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.903 17:07:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.903 17:07:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.903 17:07:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.903 17:07:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.903 17:07:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.903 17:07:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.903 17:07:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.903 17:07:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.903 17:07:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.903 17:07:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.903 17:07:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.903 17:07:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.903 17:07:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.903 17:07:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.903 17:07:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.903 17:07:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.903 17:07:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.903 17:07:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.903 17:07:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:21.805 17:07:30 -- setup/driver.sh@58 -- # [[ -> == \-\> 
]] 00:03:21.805 17:07:30 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:21.805 17:07:30 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:21.805 17:07:30 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:21.805 17:07:30 -- setup/driver.sh@65 -- # setup reset 00:03:21.805 17:07:30 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:21.805 17:07:30 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:26.007 00:03:26.007 real 0m8.100s 00:03:26.007 user 0m2.145s 00:03:26.007 sys 0m3.811s 00:03:26.007 17:07:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:26.007 17:07:34 -- common/autotest_common.sh@10 -- # set +x 00:03:26.007 ************************************ 00:03:26.007 END TEST guess_driver 00:03:26.007 ************************************ 00:03:26.007 00:03:26.007 real 0m12.208s 00:03:26.007 user 0m3.279s 00:03:26.007 sys 0m5.961s 00:03:26.007 17:07:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:26.007 17:07:34 -- common/autotest_common.sh@10 -- # set +x 00:03:26.007 ************************************ 00:03:26.007 END TEST driver 00:03:26.007 ************************************ 00:03:26.007 17:07:34 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:03:26.007 17:07:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:26.007 17:07:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:26.007 17:07:34 -- common/autotest_common.sh@10 -- # set +x 00:03:26.007 ************************************ 00:03:26.007 START TEST devices 00:03:26.007 ************************************ 00:03:26.007 17:07:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:03:26.007 * Looking for test storage... 
00:03:26.007 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:26.007 17:07:34 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:26.007 17:07:34 -- setup/devices.sh@192 -- # setup reset 00:03:26.007 17:07:34 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:26.007 17:07:34 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:28.535 17:07:37 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:28.535 17:07:37 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:28.535 17:07:37 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:28.535 17:07:37 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:28.535 17:07:37 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:28.535 17:07:37 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:28.535 17:07:37 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:28.535 17:07:37 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:28.535 17:07:37 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:28.535 17:07:37 -- setup/devices.sh@196 -- # blocks=() 00:03:28.535 17:07:37 -- setup/devices.sh@196 -- # declare -a blocks 00:03:28.535 17:07:37 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:28.535 17:07:37 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:28.535 17:07:37 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:28.535 17:07:37 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:28.535 17:07:37 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:28.535 17:07:37 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:28.535 17:07:37 -- setup/devices.sh@202 -- # pci=0000:5f:00.0 00:03:28.535 17:07:37 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:03:28.535 17:07:37 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:28.535 17:07:37 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:28.535 17:07:37 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:28.535 No valid GPT data, bailing 00:03:28.535 17:07:37 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:28.535 17:07:37 -- scripts/common.sh@391 -- # pt= 00:03:28.535 17:07:37 -- scripts/common.sh@392 -- # return 1 00:03:28.535 17:07:37 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:28.535 17:07:37 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:28.535 17:07:37 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:28.535 17:07:37 -- setup/common.sh@80 -- # echo 1600321314816 00:03:28.535 17:07:37 -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:03:28.535 17:07:37 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:28.535 17:07:37 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5f:00.0 00:03:28.535 17:07:37 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:28.535 17:07:37 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:28.535 17:07:37 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:28.535 17:07:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:28.535 17:07:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:28.535 17:07:37 -- common/autotest_common.sh@10 -- # set +x 00:03:28.794 ************************************ 00:03:28.794 START TEST nvme_mount 00:03:28.794 ************************************ 00:03:28.794 17:07:37 -- 
common/autotest_common.sh@1111 -- # nvme_mount 00:03:28.794 17:07:37 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:28.794 17:07:37 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:28.794 17:07:37 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:28.794 17:07:37 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:28.794 17:07:37 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:28.794 17:07:37 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:28.794 17:07:37 -- setup/common.sh@40 -- # local part_no=1 00:03:28.794 17:07:37 -- setup/common.sh@41 -- # local size=1073741824 00:03:28.794 17:07:37 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:28.794 17:07:37 -- setup/common.sh@44 -- # parts=() 00:03:28.794 17:07:37 -- setup/common.sh@44 -- # local parts 00:03:28.794 17:07:37 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:28.794 17:07:37 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:28.794 17:07:37 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:28.794 17:07:37 -- setup/common.sh@46 -- # (( part++ )) 00:03:28.794 17:07:37 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:28.794 17:07:37 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:28.794 17:07:37 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:28.794 17:07:37 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:29.729 Creating new GPT entries in memory. 00:03:29.729 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:29.729 other utilities. 00:03:29.729 17:07:38 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:29.729 17:07:38 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:29.729 17:07:38 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:29.729 17:07:38 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:29.729 17:07:38 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:31.103 Creating new GPT entries in memory. 00:03:31.103 The operation has completed successfully. 
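The partition_drive step traced above wipes any existing label on the test disk and creates a single 1 GiB partition (sectors 2048..2099199 with 512-byte sectors), waiting for the kernel's partition uevent before a filesystem goes on it. A minimal stand-alone sketch of the same sequence in shell, assuming /dev/nvme0n1 as the test disk and using udevadm settle as a stand-in for the scripts/sync_dev_uevents.sh helper:

    disk=/dev/nvme0n1                                  # assumed test disk, matching this run
    sgdisk "$disk" --zap-all                           # destroy existing GPT/MBR structures
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199  # partition 1: 2097152 sectors = 1 GiB, created under an flock as in the trace
    udevadm settle                                     # stand-in for scripts/sync_dev_uevents.sh block/partition nvme0n1p1
    [[ -b ${disk}p1 ]] && echo "nvme0n1p1 ready for mkfs.ext4"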
00:03:31.103 17:07:39 -- setup/common.sh@57 -- # (( part++ )) 00:03:31.103 17:07:39 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:31.103 17:07:39 -- setup/common.sh@62 -- # wait 2905209 00:03:31.103 17:07:39 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:31.103 17:07:39 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:31.103 17:07:39 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:31.103 17:07:39 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:31.103 17:07:39 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:31.103 17:07:39 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:31.103 17:07:39 -- setup/devices.sh@105 -- # verify 0000:5f:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:31.103 17:07:39 -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:03:31.103 17:07:39 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:31.103 17:07:39 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:31.103 17:07:39 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:31.103 17:07:39 -- setup/devices.sh@53 -- # local found=0 00:03:31.103 17:07:39 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:31.103 17:07:39 -- setup/devices.sh@56 -- # : 00:03:31.103 17:07:39 -- setup/devices.sh@59 -- # local pci status 00:03:31.103 17:07:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.103 17:07:39 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:03:31.103 17:07:39 -- setup/devices.sh@47 -- # setup output config 00:03:31.103 17:07:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.103 17:07:40 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:33.634 17:07:42 -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:33.634 17:07:42 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:33.634 17:07:42 -- setup/devices.sh@63 -- # found=1 00:03:33.634 17:07:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.634 17:07:42 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:33.634 17:07:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.634 17:07:42 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:33.634 17:07:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.634 17:07:42 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:33.634 17:07:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.634 17:07:42 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:33.634 17:07:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.634 17:07:42 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:33.634 17:07:42 -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:03:33.634 17:07:42 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:33.634 17:07:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.634 17:07:42 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:33.634 17:07:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.634 17:07:42 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:33.634 17:07:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.634 17:07:42 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:33.634 17:07:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.634 17:07:42 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:33.634 17:07:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.634 17:07:42 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:33.634 17:07:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.634 17:07:42 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:33.634 17:07:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.634 17:07:42 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:33.634 17:07:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.634 17:07:42 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:33.634 17:07:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.634 17:07:42 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:33.634 17:07:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.634 17:07:42 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:33.634 17:07:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.634 17:07:42 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:33.634 17:07:42 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:33.634 17:07:42 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:33.634 17:07:42 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:33.634 17:07:42 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:33.634 17:07:42 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:33.634 17:07:42 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:33.634 17:07:42 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:33.634 17:07:42 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:33.634 17:07:42 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:33.634 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:33.634 17:07:42 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:33.634 17:07:42 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:33.893 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:33.893 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:03:33.893 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:33.893 /dev/nvme0n1: calling ioctl to re-read partition table: Success 
00:03:33.893 17:07:42 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:33.893 17:07:42 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:33.893 17:07:42 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:33.893 17:07:42 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:33.893 17:07:42 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:33.893 17:07:42 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:33.893 17:07:42 -- setup/devices.sh@116 -- # verify 0000:5f:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:33.893 17:07:42 -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:03:33.893 17:07:42 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:33.893 17:07:42 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:33.893 17:07:42 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:33.893 17:07:42 -- setup/devices.sh@53 -- # local found=0 00:03:33.893 17:07:42 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:33.893 17:07:42 -- setup/devices.sh@56 -- # : 00:03:33.893 17:07:42 -- setup/devices.sh@59 -- # local pci status 00:03:33.893 17:07:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.893 17:07:43 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:03:33.893 17:07:43 -- setup/devices.sh@47 -- # setup output config 00:03:33.893 17:07:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:33.893 17:07:43 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:36.426 17:07:45 -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:36.426 17:07:45 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:36.426 17:07:45 -- setup/devices.sh@63 -- # found=1 00:03:36.426 17:07:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.426 17:07:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:36.426 17:07:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.426 17:07:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:36.426 17:07:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.426 17:07:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:36.426 17:07:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.426 17:07:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:36.426 17:07:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.426 17:07:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:36.426 17:07:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.426 17:07:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:36.426 17:07:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:03:36.426 17:07:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:36.426 17:07:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.426 17:07:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:36.426 17:07:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.426 17:07:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:36.426 17:07:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.426 17:07:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:36.426 17:07:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.426 17:07:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:36.426 17:07:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.426 17:07:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:36.426 17:07:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.426 17:07:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:36.426 17:07:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.427 17:07:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:36.427 17:07:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.427 17:07:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:36.427 17:07:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.427 17:07:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:36.427 17:07:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.686 17:07:45 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:36.686 17:07:45 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:36.686 17:07:45 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.686 17:07:45 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:36.686 17:07:45 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:36.686 17:07:45 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.686 17:07:45 -- setup/devices.sh@125 -- # verify 0000:5f:00.0 data@nvme0n1 '' '' 00:03:36.686 17:07:45 -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:03:36.686 17:07:45 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:36.686 17:07:45 -- setup/devices.sh@50 -- # local mount_point= 00:03:36.686 17:07:45 -- setup/devices.sh@51 -- # local test_file= 00:03:36.686 17:07:45 -- setup/devices.sh@53 -- # local found=0 00:03:36.686 17:07:45 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:36.686 17:07:45 -- setup/devices.sh@59 -- # local pci status 00:03:36.686 17:07:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.686 17:07:45 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:03:36.686 17:07:45 -- setup/devices.sh@47 -- # setup output config 00:03:36.686 17:07:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.686 17:07:45 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:39.219 17:07:48 -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:39.219 17:07:48 -- setup/devices.sh@62 -- # [[ Active 
devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:39.219 17:07:48 -- setup/devices.sh@63 -- # found=1 00:03:39.219 17:07:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.219 17:07:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:39.219 17:07:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.219 17:07:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:39.219 17:07:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.219 17:07:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:39.219 17:07:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.220 17:07:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:39.220 17:07:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.220 17:07:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:39.220 17:07:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.220 17:07:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:39.220 17:07:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.220 17:07:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:39.220 17:07:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.220 17:07:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:39.220 17:07:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.220 17:07:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:39.220 17:07:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.220 17:07:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:39.220 17:07:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.220 17:07:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:39.220 17:07:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.220 17:07:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:39.220 17:07:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.220 17:07:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:39.220 17:07:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.220 17:07:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:39.220 17:07:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.220 17:07:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:39.220 17:07:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.220 17:07:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:39.220 17:07:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.479 17:07:48 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:39.479 17:07:48 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:39.479 17:07:48 -- setup/devices.sh@68 -- # return 0 00:03:39.479 17:07:48 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:39.479 17:07:48 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.479 17:07:48 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:39.479 17:07:48 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:39.479 17:07:48 -- setup/devices.sh@28 -- # wipefs 
--all /dev/nvme0n1 00:03:39.479 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:39.479 00:03:39.479 real 0m10.650s 00:03:39.479 user 0m3.037s 00:03:39.479 sys 0m5.381s 00:03:39.479 17:07:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:39.479 17:07:48 -- common/autotest_common.sh@10 -- # set +x 00:03:39.479 ************************************ 00:03:39.479 END TEST nvme_mount 00:03:39.479 ************************************ 00:03:39.479 17:07:48 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:39.479 17:07:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:39.479 17:07:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:39.479 17:07:48 -- common/autotest_common.sh@10 -- # set +x 00:03:39.479 ************************************ 00:03:39.479 START TEST dm_mount 00:03:39.479 ************************************ 00:03:39.479 17:07:48 -- common/autotest_common.sh@1111 -- # dm_mount 00:03:39.479 17:07:48 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:39.479 17:07:48 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:39.479 17:07:48 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:39.479 17:07:48 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:39.479 17:07:48 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:39.479 17:07:48 -- setup/common.sh@40 -- # local part_no=2 00:03:39.479 17:07:48 -- setup/common.sh@41 -- # local size=1073741824 00:03:39.479 17:07:48 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:39.479 17:07:48 -- setup/common.sh@44 -- # parts=() 00:03:39.479 17:07:48 -- setup/common.sh@44 -- # local parts 00:03:39.479 17:07:48 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:39.479 17:07:48 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:39.479 17:07:48 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:39.479 17:07:48 -- setup/common.sh@46 -- # (( part++ )) 00:03:39.479 17:07:48 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:39.479 17:07:48 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:39.479 17:07:48 -- setup/common.sh@46 -- # (( part++ )) 00:03:39.479 17:07:48 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:39.479 17:07:48 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:39.479 17:07:48 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:39.479 17:07:48 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:40.857 Creating new GPT entries in memory. 00:03:40.857 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:40.857 other utilities. 00:03:40.857 17:07:49 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:40.857 17:07:49 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:40.857 17:07:49 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:40.857 17:07:49 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:40.857 17:07:49 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:41.793 Creating new GPT entries in memory. 00:03:41.793 The operation has completed successfully. 00:03:41.793 17:07:50 -- setup/common.sh@57 -- # (( part++ )) 00:03:41.793 17:07:50 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:41.793 17:07:50 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:41.793 17:07:50 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:41.793 17:07:50 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:42.748 The operation has completed successfully. 00:03:42.748 17:07:51 -- setup/common.sh@57 -- # (( part++ )) 00:03:42.748 17:07:51 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:42.748 17:07:51 -- setup/common.sh@62 -- # wait 2909400 00:03:42.748 17:07:51 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:42.748 17:07:51 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:42.748 17:07:51 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:42.748 17:07:51 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:42.748 17:07:51 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:42.748 17:07:51 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:42.748 17:07:51 -- setup/devices.sh@161 -- # break 00:03:42.748 17:07:51 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:42.748 17:07:51 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:42.748 17:07:51 -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:03:42.748 17:07:51 -- setup/devices.sh@166 -- # dm=dm-2 00:03:42.748 17:07:51 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:03:42.748 17:07:51 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:03:42.748 17:07:51 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:42.748 17:07:51 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:03:42.748 17:07:51 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:42.748 17:07:51 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:42.748 17:07:51 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:42.748 17:07:51 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:42.748 17:07:51 -- setup/devices.sh@174 -- # verify 0000:5f:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:42.748 17:07:51 -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:03:42.748 17:07:51 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:42.748 17:07:51 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:42.748 17:07:51 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:42.748 17:07:51 -- setup/devices.sh@53 -- # local found=0 00:03:42.748 17:07:51 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:42.748 17:07:51 -- setup/devices.sh@56 -- # : 00:03:42.748 17:07:51 -- setup/devices.sh@59 -- # local pci status 00:03:42.748 17:07:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.748 17:07:51 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:03:42.748 17:07:51 -- setup/devices.sh@47 -- # setup output config 
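For dm_mount, the trace above resolves which dm-N node backs the freshly created /dev/mapper/nvme_dm_test and checks that both NVMe partitions expose it as a holder before the mapper device is formatted and mounted. A small shell sketch of that resolution check, using the names from this run (the dmsetup table itself is not shown in this excerpt):

    name=nvme_dm_test
    dm=$(readlink -f "/dev/mapper/$name")   # resolves to /dev/dm-2 in this run
    dm=${dm##*/}                            # keep only the node name, e.g. dm-2
    for part in nvme0n1p1 nvme0n1p2; do     # both partitions should list the holder
        [[ -e /sys/class/block/$part/holders/$dm ]] || echo "$part is not backing $name"
    done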
00:03:42.748 17:07:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.748 17:07:51 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:45.297 17:07:54 -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:45.297 17:07:54 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:45.297 17:07:54 -- setup/devices.sh@63 -- # found=1 00:03:45.297 17:07:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.297 17:07:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:45.297 17:07:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.297 17:07:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:45.297 17:07:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.298 17:07:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:45.298 17:07:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.298 17:07:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:45.298 17:07:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.298 17:07:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:45.298 17:07:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.298 17:07:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:45.298 17:07:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.298 17:07:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:45.298 17:07:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.298 17:07:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:45.298 17:07:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.298 17:07:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:45.298 17:07:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.298 17:07:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:45.298 17:07:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.298 17:07:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:45.298 17:07:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.298 17:07:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:45.298 17:07:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.298 17:07:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:45.298 17:07:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.298 17:07:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:45.298 17:07:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.298 17:07:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:45.298 17:07:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.298 17:07:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:45.298 17:07:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.298 17:07:54 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:45.298 17:07:54 -- setup/devices.sh@68 -- # [[ -n 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:45.298 17:07:54 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:45.298 17:07:54 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:45.298 17:07:54 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:45.298 17:07:54 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:45.298 17:07:54 -- setup/devices.sh@184 -- # verify 0000:5f:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:03:45.298 17:07:54 -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:03:45.298 17:07:54 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:03:45.298 17:07:54 -- setup/devices.sh@50 -- # local mount_point= 00:03:45.298 17:07:54 -- setup/devices.sh@51 -- # local test_file= 00:03:45.298 17:07:54 -- setup/devices.sh@53 -- # local found=0 00:03:45.298 17:07:54 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:45.298 17:07:54 -- setup/devices.sh@59 -- # local pci status 00:03:45.298 17:07:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.298 17:07:54 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:03:45.298 17:07:54 -- setup/devices.sh@47 -- # setup output config 00:03:45.298 17:07:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.298 17:07:54 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:47.831 17:07:56 -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:47.831 17:07:56 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:03:47.831 17:07:56 -- setup/devices.sh@63 -- # found=1 00:03:47.831 17:07:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.831 17:07:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:47.831 17:07:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.831 17:07:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:47.831 17:07:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.831 17:07:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:47.831 17:07:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.831 17:07:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:47.831 17:07:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.831 17:07:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:47.831 17:07:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.831 17:07:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:47.831 17:07:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.831 17:07:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:47.831 17:07:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.831 17:07:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:47.831 17:07:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.831 17:07:56 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == 
\0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:47.831 17:07:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.831 17:07:56 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:47.831 17:07:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.831 17:07:56 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:47.831 17:07:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.831 17:07:56 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:47.831 17:07:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.831 17:07:56 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:47.831 17:07:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.831 17:07:56 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:47.831 17:07:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.831 17:07:56 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:47.831 17:07:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.831 17:07:56 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:03:47.831 17:07:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.090 17:07:57 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:48.090 17:07:57 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:48.090 17:07:57 -- setup/devices.sh@68 -- # return 0 00:03:48.090 17:07:57 -- setup/devices.sh@187 -- # cleanup_dm 00:03:48.090 17:07:57 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:48.090 17:07:57 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:48.090 17:07:57 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:48.090 17:07:57 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:48.090 17:07:57 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:48.090 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:48.090 17:07:57 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:48.090 17:07:57 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:48.090 00:03:48.090 real 0m8.469s 00:03:48.090 user 0m2.026s 00:03:48.090 sys 0m3.446s 00:03:48.090 17:07:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:48.090 17:07:57 -- common/autotest_common.sh@10 -- # set +x 00:03:48.090 ************************************ 00:03:48.090 END TEST dm_mount 00:03:48.090 ************************************ 00:03:48.090 17:07:57 -- setup/devices.sh@1 -- # cleanup 00:03:48.090 17:07:57 -- setup/devices.sh@11 -- # cleanup_nvme 00:03:48.090 17:07:57 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:48.090 17:07:57 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:48.090 17:07:57 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:48.090 17:07:57 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:48.090 17:07:57 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:48.349 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:48.350 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:03:48.350 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:48.350 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:48.350 17:07:57 -- setup/devices.sh@12 -- 
# cleanup_dm 00:03:48.350 17:07:57 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:48.350 17:07:57 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:48.350 17:07:57 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:48.350 17:07:57 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:48.350 17:07:57 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:48.350 17:07:57 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:48.350 00:03:48.350 real 0m22.801s 00:03:48.350 user 0m6.284s 00:03:48.350 sys 0m11.108s 00:03:48.350 17:07:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:48.350 17:07:57 -- common/autotest_common.sh@10 -- # set +x 00:03:48.350 ************************************ 00:03:48.350 END TEST devices 00:03:48.350 ************************************ 00:03:48.350 00:03:48.350 real 1m18.109s 00:03:48.350 user 0m24.711s 00:03:48.350 sys 0m41.770s 00:03:48.350 17:07:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:48.350 17:07:57 -- common/autotest_common.sh@10 -- # set +x 00:03:48.350 ************************************ 00:03:48.350 END TEST setup.sh 00:03:48.350 ************************************ 00:03:48.350 17:07:57 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:50.883 Hugepages 00:03:50.883 node hugesize free / total 00:03:50.883 node0 1048576kB 0 / 0 00:03:50.883 node0 2048kB 2048 / 2048 00:03:50.883 node1 1048576kB 0 / 0 00:03:50.883 node1 2048kB 0 / 0 00:03:50.883 00:03:50.883 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:50.883 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:50.883 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:50.883 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:50.883 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:50.883 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:50.883 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:50.883 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:50.883 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:50.883 NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:50.883 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:50.883 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:50.883 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:51.142 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:51.142 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:51.142 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:51.142 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:51.142 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:51.142 17:08:00 -- spdk/autotest.sh@130 -- # uname -s 00:03:51.142 17:08:00 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:51.142 17:08:00 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:51.142 17:08:00 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:53.681 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:53.681 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:53.681 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:53.681 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:53.681 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:53.681 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:53.681 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:53.681 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:53.681 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:53.681 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:53.681 
0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:53.681 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:53.681 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:53.681 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:53.681 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:53.681 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:55.587 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:03:55.587 17:08:04 -- common/autotest_common.sh@1518 -- # sleep 1 00:03:56.523 17:08:05 -- common/autotest_common.sh@1519 -- # bdfs=() 00:03:56.523 17:08:05 -- common/autotest_common.sh@1519 -- # local bdfs 00:03:56.523 17:08:05 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:56.523 17:08:05 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:56.523 17:08:05 -- common/autotest_common.sh@1499 -- # bdfs=() 00:03:56.523 17:08:05 -- common/autotest_common.sh@1499 -- # local bdfs 00:03:56.523 17:08:05 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:56.523 17:08:05 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:56.523 17:08:05 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:03:56.523 17:08:05 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:03:56.523 17:08:05 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:5f:00.0 00:03:56.523 17:08:05 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:59.061 Waiting for block devices as requested 00:03:59.061 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:03:59.320 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:59.320 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:59.320 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:59.579 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:59.579 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:59.579 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:59.579 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:59.838 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:59.838 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:59.838 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:59.838 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:00.097 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:00.097 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:00.097 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:00.357 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:00.357 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:00.357 17:08:09 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:00.357 17:08:09 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5f:00.0 00:04:00.357 17:08:09 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 00:04:00.357 17:08:09 -- common/autotest_common.sh@1488 -- # grep 0000:5f:00.0/nvme/nvme 00:04:00.357 17:08:09 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 00:04:00.357 17:08:09 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 ]] 00:04:00.357 17:08:09 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 00:04:00.357 17:08:09 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:04:00.357 17:08:09 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:00.357 
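The controller lookup just traced maps a PCI address back to its /dev/nvmeX character device by following the /sys/class/nvme symlinks. A self-contained sketch of the same idea, assuming only that the controller is bound to the kernel nvme driver; the BDF is the one used throughout this run, names will differ elsewhere:

    bdf=0000:5f:00.0                          # PCI address of the NVMe drive in this run
    for link in /sys/class/nvme/nvme*; do
        # each class entry is a symlink into the PCI hierarchy; match on the BDF
        if readlink -f "$link" | grep -q "/$bdf/nvme/"; then
            echo "/dev/$(basename "$link")"   # prints /dev/nvme0 on this machine
        fi
    done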
17:08:09 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:00.357 17:08:09 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:00.357 17:08:09 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:00.357 17:08:09 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:00.357 17:08:09 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:04:00.357 17:08:09 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:00.357 17:08:09 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:00.357 17:08:09 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:00.357 17:08:09 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:00.357 17:08:09 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:00.357 17:08:09 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:00.357 17:08:09 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:00.357 17:08:09 -- common/autotest_common.sh@1543 -- # continue 00:04:00.357 17:08:09 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:00.357 17:08:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:00.357 17:08:09 -- common/autotest_common.sh@10 -- # set +x 00:04:00.357 17:08:09 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:00.357 17:08:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:00.357 17:08:09 -- common/autotest_common.sh@10 -- # set +x 00:04:00.357 17:08:09 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:03.644 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:03.644 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:03.644 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:03.644 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:03.644 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:03.644 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:03.644 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:03.644 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:03.644 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:03.644 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:03.644 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:03.644 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:03.644 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:03.644 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:03.644 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:03.644 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:05.019 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:04:05.019 17:08:13 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:05.019 17:08:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:05.019 17:08:13 -- common/autotest_common.sh@10 -- # set +x 00:04:05.019 17:08:14 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:05.019 17:08:14 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:04:05.019 17:08:14 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:04:05.019 17:08:14 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:05.019 17:08:14 -- common/autotest_common.sh@1563 -- # local bdfs 00:04:05.019 17:08:14 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:04:05.019 17:08:14 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:05.019 17:08:14 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:05.019 17:08:14 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:05.019 17:08:14 -- common/autotest_common.sh@1500 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:05.019 17:08:14 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:05.019 17:08:14 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:04:05.019 17:08:14 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:5f:00.0 00:04:05.019 17:08:14 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:04:05.019 17:08:14 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5f:00.0/device 00:04:05.019 17:08:14 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:05.019 17:08:14 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:05.019 17:08:14 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:05.019 17:08:14 -- common/autotest_common.sh@1572 -- # printf '%s\n' 0000:5f:00.0 00:04:05.019 17:08:14 -- common/autotest_common.sh@1578 -- # [[ -z 0000:5f:00.0 ]] 00:04:05.019 17:08:14 -- common/autotest_common.sh@1583 -- # spdk_tgt_pid=2918198 00:04:05.019 17:08:14 -- common/autotest_common.sh@1584 -- # waitforlisten 2918198 00:04:05.019 17:08:14 -- common/autotest_common.sh@1582 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:05.019 17:08:14 -- common/autotest_common.sh@817 -- # '[' -z 2918198 ']' 00:04:05.019 17:08:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:05.019 17:08:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:05.019 17:08:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:05.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:05.019 17:08:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:05.019 17:08:14 -- common/autotest_common.sh@10 -- # set +x 00:04:05.019 [2024-04-24 17:08:14.158424] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
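The opal_revert_cleanup prologue above picks out controllers by PCI device id: it reads /sys/bus/pci/devices/<bdf>/device and keeps the BDF when the id matches 0x0a54. A small sketch that scans the whole bus the same way; the class-code filter is an addition of mine to skip non-NVMe functions, and 0x0a54 is simply the id this run happens to match on:

    want=0x0a54
    for dev in /sys/bus/pci/devices/*; do
        [ "$(cat "$dev/class")" = "0x010802" ] || continue   # NVMe class code (assumed filter)
        [ "$(cat "$dev/device")" = "$want" ] && basename "$dev"
    done

On this node the loop would print the single BDF 0000:5f:00.0.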
00:04:05.019 [2024-04-24 17:08:14.158470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2918198 ] 00:04:05.019 EAL: No free 2048 kB hugepages reported on node 1 00:04:05.019 [2024-04-24 17:08:14.213806] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.277 [2024-04-24 17:08:14.285458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.843 17:08:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:05.843 17:08:14 -- common/autotest_common.sh@850 -- # return 0 00:04:05.843 17:08:14 -- common/autotest_common.sh@1586 -- # bdf_id=0 00:04:05.843 17:08:14 -- common/autotest_common.sh@1587 -- # for bdf in "${bdfs[@]}" 00:04:05.843 17:08:14 -- common/autotest_common.sh@1588 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5f:00.0 00:04:09.128 nvme0n1 00:04:09.128 17:08:17 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:09.128 [2024-04-24 17:08:18.084627] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:09.128 request: 00:04:09.128 { 00:04:09.128 "nvme_ctrlr_name": "nvme0", 00:04:09.128 "password": "test", 00:04:09.128 "method": "bdev_nvme_opal_revert", 00:04:09.128 "req_id": 1 00:04:09.128 } 00:04:09.128 Got JSON-RPC error response 00:04:09.128 response: 00:04:09.128 { 00:04:09.128 "code": -32602, 00:04:09.128 "message": "Invalid parameters" 00:04:09.128 } 00:04:09.128 17:08:18 -- common/autotest_common.sh@1590 -- # true 00:04:09.128 17:08:18 -- common/autotest_common.sh@1591 -- # (( ++bdf_id )) 00:04:09.128 17:08:18 -- common/autotest_common.sh@1594 -- # killprocess 2918198 00:04:09.128 17:08:18 -- common/autotest_common.sh@936 -- # '[' -z 2918198 ']' 00:04:09.128 17:08:18 -- common/autotest_common.sh@940 -- # kill -0 2918198 00:04:09.128 17:08:18 -- common/autotest_common.sh@941 -- # uname 00:04:09.128 17:08:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:09.128 17:08:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2918198 00:04:09.128 17:08:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:09.128 17:08:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:09.128 17:08:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2918198' 00:04:09.128 killing process with pid 2918198 00:04:09.128 17:08:18 -- common/autotest_common.sh@955 -- # kill 2918198 00:04:09.128 17:08:18 -- common/autotest_common.sh@960 -- # wait 2918198 00:04:11.660 17:08:20 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:11.660 17:08:20 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:11.660 17:08:20 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:11.660 17:08:20 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:11.660 17:08:20 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:11.660 17:08:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:11.660 17:08:20 -- common/autotest_common.sh@10 -- # set +x 00:04:11.660 17:08:20 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:11.660 17:08:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:11.660 17:08:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:11.660 17:08:20 
-- common/autotest_common.sh@10 -- # set +x 00:04:11.660 ************************************ 00:04:11.660 START TEST env 00:04:11.660 ************************************ 00:04:11.660 17:08:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:11.660 * Looking for test storage... 00:04:11.660 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:04:11.660 17:08:20 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:11.660 17:08:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:11.660 17:08:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:11.660 17:08:20 -- common/autotest_common.sh@10 -- # set +x 00:04:11.660 ************************************ 00:04:11.660 START TEST env_memory 00:04:11.660 ************************************ 00:04:11.660 17:08:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:11.660 00:04:11.660 00:04:11.660 CUnit - A unit testing framework for C - Version 2.1-3 00:04:11.660 http://cunit.sourceforge.net/ 00:04:11.660 00:04:11.660 00:04:11.660 Suite: memory 00:04:11.660 Test: alloc and free memory map ...[2024-04-24 17:08:20.735777] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:11.661 passed 00:04:11.661 Test: mem map translation ...[2024-04-24 17:08:20.754667] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:11.661 [2024-04-24 17:08:20.754680] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:11.661 [2024-04-24 17:08:20.754716] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:11.661 [2024-04-24 17:08:20.754722] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:11.661 passed 00:04:11.661 Test: mem map registration ...[2024-04-24 17:08:20.791101] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:11.661 [2024-04-24 17:08:20.791115] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:11.661 passed 00:04:11.661 Test: mem map adjacent registrations ...passed 00:04:11.661 00:04:11.661 Run Summary: Type Total Ran Passed Failed Inactive 00:04:11.661 suites 1 1 n/a 0 0 00:04:11.661 tests 4 4 4 0 0 00:04:11.661 asserts 152 152 152 0 n/a 00:04:11.661 00:04:11.661 Elapsed time = 0.126 seconds 00:04:11.661 00:04:11.661 real 0m0.133s 00:04:11.661 user 0m0.129s 00:04:11.661 sys 0m0.004s 00:04:11.661 17:08:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:11.661 17:08:20 -- common/autotest_common.sh@10 -- # set +x 00:04:11.661 ************************************ 00:04:11.661 END TEST env_memory 00:04:11.661 ************************************ 00:04:11.661 17:08:20 -- env/env.sh@11 -- # run_test env_vtophys 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:11.661 17:08:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:11.661 17:08:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:11.661 17:08:20 -- common/autotest_common.sh@10 -- # set +x 00:04:11.920 ************************************ 00:04:11.920 START TEST env_vtophys 00:04:11.920 ************************************ 00:04:11.920 17:08:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:11.920 EAL: lib.eal log level changed from notice to debug 00:04:11.920 EAL: Detected lcore 0 as core 0 on socket 0 00:04:11.920 EAL: Detected lcore 1 as core 1 on socket 0 00:04:11.920 EAL: Detected lcore 2 as core 2 on socket 0 00:04:11.920 EAL: Detected lcore 3 as core 3 on socket 0 00:04:11.920 EAL: Detected lcore 4 as core 4 on socket 0 00:04:11.920 EAL: Detected lcore 5 as core 5 on socket 0 00:04:11.920 EAL: Detected lcore 6 as core 6 on socket 0 00:04:11.920 EAL: Detected lcore 7 as core 9 on socket 0 00:04:11.920 EAL: Detected lcore 8 as core 10 on socket 0 00:04:11.920 EAL: Detected lcore 9 as core 11 on socket 0 00:04:11.920 EAL: Detected lcore 10 as core 12 on socket 0 00:04:11.920 EAL: Detected lcore 11 as core 13 on socket 0 00:04:11.920 EAL: Detected lcore 12 as core 16 on socket 0 00:04:11.920 EAL: Detected lcore 13 as core 17 on socket 0 00:04:11.920 EAL: Detected lcore 14 as core 18 on socket 0 00:04:11.920 EAL: Detected lcore 15 as core 19 on socket 0 00:04:11.920 EAL: Detected lcore 16 as core 20 on socket 0 00:04:11.920 EAL: Detected lcore 17 as core 21 on socket 0 00:04:11.920 EAL: Detected lcore 18 as core 24 on socket 0 00:04:11.920 EAL: Detected lcore 19 as core 25 on socket 0 00:04:11.920 EAL: Detected lcore 20 as core 26 on socket 0 00:04:11.920 EAL: Detected lcore 21 as core 27 on socket 0 00:04:11.920 EAL: Detected lcore 22 as core 28 on socket 0 00:04:11.920 EAL: Detected lcore 23 as core 29 on socket 0 00:04:11.920 EAL: Detected lcore 24 as core 0 on socket 1 00:04:11.920 EAL: Detected lcore 25 as core 1 on socket 1 00:04:11.920 EAL: Detected lcore 26 as core 2 on socket 1 00:04:11.920 EAL: Detected lcore 27 as core 3 on socket 1 00:04:11.920 EAL: Detected lcore 28 as core 4 on socket 1 00:04:11.920 EAL: Detected lcore 29 as core 5 on socket 1 00:04:11.920 EAL: Detected lcore 30 as core 6 on socket 1 00:04:11.920 EAL: Detected lcore 31 as core 8 on socket 1 00:04:11.920 EAL: Detected lcore 32 as core 9 on socket 1 00:04:11.920 EAL: Detected lcore 33 as core 10 on socket 1 00:04:11.920 EAL: Detected lcore 34 as core 11 on socket 1 00:04:11.920 EAL: Detected lcore 35 as core 12 on socket 1 00:04:11.920 EAL: Detected lcore 36 as core 13 on socket 1 00:04:11.920 EAL: Detected lcore 37 as core 16 on socket 1 00:04:11.920 EAL: Detected lcore 38 as core 17 on socket 1 00:04:11.920 EAL: Detected lcore 39 as core 18 on socket 1 00:04:11.920 EAL: Detected lcore 40 as core 19 on socket 1 00:04:11.920 EAL: Detected lcore 41 as core 20 on socket 1 00:04:11.920 EAL: Detected lcore 42 as core 21 on socket 1 00:04:11.920 EAL: Detected lcore 43 as core 25 on socket 1 00:04:11.920 EAL: Detected lcore 44 as core 26 on socket 1 00:04:11.920 EAL: Detected lcore 45 as core 27 on socket 1 00:04:11.920 EAL: Detected lcore 46 as core 28 on socket 1 00:04:11.920 EAL: Detected lcore 47 as core 29 on socket 1 00:04:11.920 EAL: Detected lcore 48 as core 0 on socket 0 00:04:11.920 EAL: Detected lcore 49 as core 1 on socket 0 
00:04:11.920 EAL: Detected lcore 50 as core 2 on socket 0 00:04:11.920 EAL: Detected lcore 51 as core 3 on socket 0 00:04:11.920 EAL: Detected lcore 52 as core 4 on socket 0 00:04:11.920 EAL: Detected lcore 53 as core 5 on socket 0 00:04:11.920 EAL: Detected lcore 54 as core 6 on socket 0 00:04:11.920 EAL: Detected lcore 55 as core 9 on socket 0 00:04:11.920 EAL: Detected lcore 56 as core 10 on socket 0 00:04:11.920 EAL: Detected lcore 57 as core 11 on socket 0 00:04:11.920 EAL: Detected lcore 58 as core 12 on socket 0 00:04:11.920 EAL: Detected lcore 59 as core 13 on socket 0 00:04:11.920 EAL: Detected lcore 60 as core 16 on socket 0 00:04:11.920 EAL: Detected lcore 61 as core 17 on socket 0 00:04:11.920 EAL: Detected lcore 62 as core 18 on socket 0 00:04:11.920 EAL: Detected lcore 63 as core 19 on socket 0 00:04:11.920 EAL: Detected lcore 64 as core 20 on socket 0 00:04:11.920 EAL: Detected lcore 65 as core 21 on socket 0 00:04:11.920 EAL: Detected lcore 66 as core 24 on socket 0 00:04:11.920 EAL: Detected lcore 67 as core 25 on socket 0 00:04:11.920 EAL: Detected lcore 68 as core 26 on socket 0 00:04:11.920 EAL: Detected lcore 69 as core 27 on socket 0 00:04:11.920 EAL: Detected lcore 70 as core 28 on socket 0 00:04:11.920 EAL: Detected lcore 71 as core 29 on socket 0 00:04:11.920 EAL: Detected lcore 72 as core 0 on socket 1 00:04:11.920 EAL: Detected lcore 73 as core 1 on socket 1 00:04:11.920 EAL: Detected lcore 74 as core 2 on socket 1 00:04:11.920 EAL: Detected lcore 75 as core 3 on socket 1 00:04:11.920 EAL: Detected lcore 76 as core 4 on socket 1 00:04:11.920 EAL: Detected lcore 77 as core 5 on socket 1 00:04:11.920 EAL: Detected lcore 78 as core 6 on socket 1 00:04:11.920 EAL: Detected lcore 79 as core 8 on socket 1 00:04:11.920 EAL: Detected lcore 80 as core 9 on socket 1 00:04:11.920 EAL: Detected lcore 81 as core 10 on socket 1 00:04:11.920 EAL: Detected lcore 82 as core 11 on socket 1 00:04:11.920 EAL: Detected lcore 83 as core 12 on socket 1 00:04:11.920 EAL: Detected lcore 84 as core 13 on socket 1 00:04:11.920 EAL: Detected lcore 85 as core 16 on socket 1 00:04:11.920 EAL: Detected lcore 86 as core 17 on socket 1 00:04:11.920 EAL: Detected lcore 87 as core 18 on socket 1 00:04:11.920 EAL: Detected lcore 88 as core 19 on socket 1 00:04:11.920 EAL: Detected lcore 89 as core 20 on socket 1 00:04:11.920 EAL: Detected lcore 90 as core 21 on socket 1 00:04:11.920 EAL: Detected lcore 91 as core 25 on socket 1 00:04:11.920 EAL: Detected lcore 92 as core 26 on socket 1 00:04:11.920 EAL: Detected lcore 93 as core 27 on socket 1 00:04:11.920 EAL: Detected lcore 94 as core 28 on socket 1 00:04:11.920 EAL: Detected lcore 95 as core 29 on socket 1 00:04:11.920 EAL: Maximum logical cores by configuration: 128 00:04:11.920 EAL: Detected CPU lcores: 96 00:04:11.920 EAL: Detected NUMA nodes: 2 00:04:11.920 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:11.920 EAL: Detected shared linkage of DPDK 00:04:11.920 EAL: No shared files mode enabled, IPC will be disabled 00:04:11.920 EAL: Bus pci wants IOVA as 'DC' 00:04:11.920 EAL: Buses did not request a specific IOVA mode. 00:04:11.920 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:11.920 EAL: Selected IOVA mode 'VA' 00:04:11.920 EAL: No free 2048 kB hugepages reported on node 1 00:04:11.920 EAL: Probing VFIO support... 
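The long lcore table EAL prints here is just the host's CPU topology: 96 logical cores spread over 2 sockets on this node. The same core and socket mapping can be read directly from sysfs; the paths below are standard Linux, only the output format is an arbitrary choice:

    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        printf '%s: core %s, socket %s\n' "${cpu##*/}" \
            "$(cat "$cpu/topology/core_id")" \
            "$(cat "$cpu/topology/physical_package_id")"
    done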
00:04:11.920 EAL: IOMMU type 1 (Type 1) is supported 00:04:11.920 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:11.920 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:11.920 EAL: VFIO support initialized 00:04:11.920 EAL: Ask a virtual area of 0x2e000 bytes 00:04:11.920 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:11.920 EAL: Setting up physically contiguous memory... 00:04:11.920 EAL: Setting maximum number of open files to 524288 00:04:11.920 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:11.920 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:11.920 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:11.920 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.920 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:11.920 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:11.920 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.920 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:11.920 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:11.920 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.920 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:11.921 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:11.921 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.921 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:11.921 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:11.921 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.921 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:11.921 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:11.921 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.921 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:11.921 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:11.921 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.921 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:11.921 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:11.921 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.921 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:11.921 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:11.921 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:11.921 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.921 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:11.921 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:11.921 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.921 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:11.921 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:11.921 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.921 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:11.921 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:11.921 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.921 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:11.921 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:11.921 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.921 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:11.921 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:11.921 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.921 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:04:11.921 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:11.921 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.921 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:11.921 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:11.921 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.921 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:11.921 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:11.921 EAL: Hugepages will be freed exactly as allocated. 00:04:11.921 EAL: No shared files mode enabled, IPC is disabled 00:04:11.921 EAL: No shared files mode enabled, IPC is disabled 00:04:11.921 EAL: TSC frequency is ~2100000 KHz 00:04:11.921 EAL: Main lcore 0 is ready (tid=7fbc62724a00;cpuset=[0]) 00:04:11.921 EAL: Trying to obtain current memory policy. 00:04:11.921 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.921 EAL: Restoring previous memory policy: 0 00:04:11.921 EAL: request: mp_malloc_sync 00:04:11.921 EAL: No shared files mode enabled, IPC is disabled 00:04:11.921 EAL: Heap on socket 0 was expanded by 2MB 00:04:11.921 EAL: No shared files mode enabled, IPC is disabled 00:04:11.921 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:11.921 EAL: Mem event callback 'spdk:(nil)' registered 00:04:11.921 00:04:11.921 00:04:11.921 CUnit - A unit testing framework for C - Version 2.1-3 00:04:11.921 http://cunit.sourceforge.net/ 00:04:11.921 00:04:11.921 00:04:11.921 Suite: components_suite 00:04:11.921 Test: vtophys_malloc_test ...passed 00:04:11.921 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:11.921 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.921 EAL: Restoring previous memory policy: 4 00:04:11.921 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.921 EAL: request: mp_malloc_sync 00:04:11.921 EAL: No shared files mode enabled, IPC is disabled 00:04:11.921 EAL: Heap on socket 0 was expanded by 4MB 00:04:11.921 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.921 EAL: request: mp_malloc_sync 00:04:11.921 EAL: No shared files mode enabled, IPC is disabled 00:04:11.921 EAL: Heap on socket 0 was shrunk by 4MB 00:04:11.921 EAL: Trying to obtain current memory policy. 00:04:11.921 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.921 EAL: Restoring previous memory policy: 4 00:04:11.921 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.921 EAL: request: mp_malloc_sync 00:04:11.921 EAL: No shared files mode enabled, IPC is disabled 00:04:11.921 EAL: Heap on socket 0 was expanded by 6MB 00:04:11.921 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.921 EAL: request: mp_malloc_sync 00:04:11.921 EAL: No shared files mode enabled, IPC is disabled 00:04:11.921 EAL: Heap on socket 0 was shrunk by 6MB 00:04:11.921 EAL: Trying to obtain current memory policy. 00:04:11.921 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.921 EAL: Restoring previous memory policy: 4 00:04:11.921 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.921 EAL: request: mp_malloc_sync 00:04:11.921 EAL: No shared files mode enabled, IPC is disabled 00:04:11.921 EAL: Heap on socket 0 was expanded by 10MB 00:04:11.921 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.921 EAL: request: mp_malloc_sync 00:04:11.921 EAL: No shared files mode enabled, IPC is disabled 00:04:11.921 EAL: Heap on socket 0 was shrunk by 10MB 00:04:11.921 EAL: Trying to obtain current memory policy. 
00:04:11.921 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.921 EAL: Restoring previous memory policy: 4 00:04:11.921 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.921 EAL: request: mp_malloc_sync 00:04:11.921 EAL: No shared files mode enabled, IPC is disabled 00:04:11.921 EAL: Heap on socket 0 was expanded by 18MB 00:04:11.921 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.921 EAL: request: mp_malloc_sync 00:04:11.921 EAL: No shared files mode enabled, IPC is disabled 00:04:11.921 EAL: Heap on socket 0 was shrunk by 18MB 00:04:11.921 EAL: Trying to obtain current memory policy. 00:04:11.921 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.921 EAL: Restoring previous memory policy: 4 00:04:11.921 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.921 EAL: request: mp_malloc_sync 00:04:11.921 EAL: No shared files mode enabled, IPC is disabled 00:04:11.921 EAL: Heap on socket 0 was expanded by 34MB 00:04:11.921 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.921 EAL: request: mp_malloc_sync 00:04:11.921 EAL: No shared files mode enabled, IPC is disabled 00:04:11.921 EAL: Heap on socket 0 was shrunk by 34MB 00:04:11.921 EAL: Trying to obtain current memory policy. 00:04:11.921 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.921 EAL: Restoring previous memory policy: 4 00:04:11.921 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.921 EAL: request: mp_malloc_sync 00:04:11.921 EAL: No shared files mode enabled, IPC is disabled 00:04:11.921 EAL: Heap on socket 0 was expanded by 66MB 00:04:11.921 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.921 EAL: request: mp_malloc_sync 00:04:11.921 EAL: No shared files mode enabled, IPC is disabled 00:04:11.921 EAL: Heap on socket 0 was shrunk by 66MB 00:04:11.921 EAL: Trying to obtain current memory policy. 00:04:11.921 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.921 EAL: Restoring previous memory policy: 4 00:04:11.921 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.921 EAL: request: mp_malloc_sync 00:04:11.921 EAL: No shared files mode enabled, IPC is disabled 00:04:11.921 EAL: Heap on socket 0 was expanded by 130MB 00:04:11.921 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.180 EAL: request: mp_malloc_sync 00:04:12.180 EAL: No shared files mode enabled, IPC is disabled 00:04:12.180 EAL: Heap on socket 0 was shrunk by 130MB 00:04:12.180 EAL: Trying to obtain current memory policy. 00:04:12.180 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.180 EAL: Restoring previous memory policy: 4 00:04:12.180 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.180 EAL: request: mp_malloc_sync 00:04:12.180 EAL: No shared files mode enabled, IPC is disabled 00:04:12.180 EAL: Heap on socket 0 was expanded by 258MB 00:04:12.180 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.180 EAL: request: mp_malloc_sync 00:04:12.180 EAL: No shared files mode enabled, IPC is disabled 00:04:12.180 EAL: Heap on socket 0 was shrunk by 258MB 00:04:12.180 EAL: Trying to obtain current memory policy. 
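The heap sizes reported by this sweep (4, 6, 10, 18, 34, 66, 130, 258 MB so far, continuing to 514 and 1026 MB below) follow a simple pattern: each request is 2 + 2^k MB, so the increment doubles at every step and the test walks the allocator from a couple of hugepages up past 1 GiB. A one-liner that reproduces the observed sequence, nothing more than the arithmetic behind the numbers in the log:

    for k in $(seq 1 10); do echo "$(( 2 + (1 << k) ))MB"; done   # prints 4MB through 1026MB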
00:04:12.180 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.180 EAL: Restoring previous memory policy: 4 00:04:12.180 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.180 EAL: request: mp_malloc_sync 00:04:12.180 EAL: No shared files mode enabled, IPC is disabled 00:04:12.180 EAL: Heap on socket 0 was expanded by 514MB 00:04:12.438 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.438 EAL: request: mp_malloc_sync 00:04:12.438 EAL: No shared files mode enabled, IPC is disabled 00:04:12.438 EAL: Heap on socket 0 was shrunk by 514MB 00:04:12.438 EAL: Trying to obtain current memory policy. 00:04:12.438 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.697 EAL: Restoring previous memory policy: 4 00:04:12.697 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.697 EAL: request: mp_malloc_sync 00:04:12.697 EAL: No shared files mode enabled, IPC is disabled 00:04:12.697 EAL: Heap on socket 0 was expanded by 1026MB 00:04:12.697 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.956 EAL: request: mp_malloc_sync 00:04:12.956 EAL: No shared files mode enabled, IPC is disabled 00:04:12.956 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:12.956 passed 00:04:12.956 00:04:12.956 Run Summary: Type Total Ran Passed Failed Inactive 00:04:12.956 suites 1 1 n/a 0 0 00:04:12.956 tests 2 2 2 0 0 00:04:12.956 asserts 497 497 497 0 n/a 00:04:12.956 00:04:12.956 Elapsed time = 0.958 seconds 00:04:12.956 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.956 EAL: request: mp_malloc_sync 00:04:12.956 EAL: No shared files mode enabled, IPC is disabled 00:04:12.956 EAL: Heap on socket 0 was shrunk by 2MB 00:04:12.956 EAL: No shared files mode enabled, IPC is disabled 00:04:12.956 EAL: No shared files mode enabled, IPC is disabled 00:04:12.956 EAL: No shared files mode enabled, IPC is disabled 00:04:12.956 00:04:12.956 real 0m1.066s 00:04:12.956 user 0m0.629s 00:04:12.956 sys 0m0.412s 00:04:12.956 17:08:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:12.956 17:08:22 -- common/autotest_common.sh@10 -- # set +x 00:04:12.956 ************************************ 00:04:12.956 END TEST env_vtophys 00:04:12.956 ************************************ 00:04:12.956 17:08:22 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:04:12.956 17:08:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:12.956 17:08:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:12.956 17:08:22 -- common/autotest_common.sh@10 -- # set +x 00:04:13.215 ************************************ 00:04:13.215 START TEST env_pci 00:04:13.215 ************************************ 00:04:13.215 17:08:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:04:13.215 00:04:13.215 00:04:13.215 CUnit - A unit testing framework for C - Version 2.1-3 00:04:13.215 http://cunit.sourceforge.net/ 00:04:13.215 00:04:13.215 00:04:13.215 Suite: pci 00:04:13.215 Test: pci_hook ...[2024-04-24 17:08:22.237895] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2919634 has claimed it 00:04:13.215 EAL: Cannot find device (10000:00:01.0) 00:04:13.215 EAL: Failed to attach device on primary process 00:04:13.215 passed 00:04:13.215 00:04:13.215 Run Summary: Type Total Ran Passed Failed Inactive 00:04:13.215 suites 1 1 n/a 0 0 00:04:13.215 tests 1 1 1 0 0 00:04:13.215 asserts 
25 25 25 0 n/a 00:04:13.215 00:04:13.215 Elapsed time = 0.023 seconds 00:04:13.215 00:04:13.215 real 0m0.040s 00:04:13.215 user 0m0.008s 00:04:13.215 sys 0m0.032s 00:04:13.215 17:08:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:13.215 17:08:22 -- common/autotest_common.sh@10 -- # set +x 00:04:13.215 ************************************ 00:04:13.215 END TEST env_pci 00:04:13.215 ************************************ 00:04:13.215 17:08:22 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:13.215 17:08:22 -- env/env.sh@15 -- # uname 00:04:13.215 17:08:22 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:13.215 17:08:22 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:13.215 17:08:22 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:13.215 17:08:22 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:13.215 17:08:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:13.215 17:08:22 -- common/autotest_common.sh@10 -- # set +x 00:04:13.215 ************************************ 00:04:13.215 START TEST env_dpdk_post_init 00:04:13.215 ************************************ 00:04:13.215 17:08:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:13.215 EAL: Detected CPU lcores: 96 00:04:13.215 EAL: Detected NUMA nodes: 2 00:04:13.215 EAL: Detected shared linkage of DPDK 00:04:13.215 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:13.474 EAL: Selected IOVA mode 'VA' 00:04:13.474 EAL: No free 2048 kB hugepages reported on node 1 00:04:13.474 EAL: VFIO support initialized 00:04:13.474 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:13.474 EAL: Using IOMMU type 1 (Type 1) 00:04:13.474 EAL: Ignore mapping IO port bar(1) 00:04:13.474 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:13.474 EAL: Ignore mapping IO port bar(1) 00:04:13.474 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:13.474 EAL: Ignore mapping IO port bar(1) 00:04:13.474 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:13.474 EAL: Ignore mapping IO port bar(1) 00:04:13.474 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:13.474 EAL: Ignore mapping IO port bar(1) 00:04:13.474 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:13.474 EAL: Ignore mapping IO port bar(1) 00:04:13.474 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:13.474 EAL: Ignore mapping IO port bar(1) 00:04:13.474 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:13.474 EAL: Ignore mapping IO port bar(1) 00:04:13.474 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:14.409 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5f:00.0 (socket 0) 00:04:14.409 EAL: Ignore mapping IO port bar(1) 00:04:14.409 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:14.409 EAL: Ignore mapping IO port bar(1) 00:04:14.409 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:14.409 EAL: Ignore mapping IO port bar(1) 00:04:14.409 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:14.409 EAL: Ignore mapping 
IO port bar(1) 00:04:14.409 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:14.409 EAL: Ignore mapping IO port bar(1) 00:04:14.409 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:14.409 EAL: Ignore mapping IO port bar(1) 00:04:14.409 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:14.409 EAL: Ignore mapping IO port bar(1) 00:04:14.409 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:14.409 EAL: Ignore mapping IO port bar(1) 00:04:14.409 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:17.778 EAL: Releasing PCI mapped resource for 0000:5f:00.0 00:04:17.778 EAL: Calling pci_unmap_resource for 0000:5f:00.0 at 0x202001020000 00:04:18.347 Starting DPDK initialization... 00:04:18.347 Starting SPDK post initialization... 00:04:18.347 SPDK NVMe probe 00:04:18.347 Attaching to 0000:5f:00.0 00:04:18.347 Attached to 0000:5f:00.0 00:04:18.347 Cleaning up... 00:04:18.347 00:04:18.347 real 0m4.956s 00:04:18.347 user 0m3.867s 00:04:18.347 sys 0m0.153s 00:04:18.347 17:08:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:18.347 17:08:27 -- common/autotest_common.sh@10 -- # set +x 00:04:18.347 ************************************ 00:04:18.347 END TEST env_dpdk_post_init 00:04:18.347 ************************************ 00:04:18.347 17:08:27 -- env/env.sh@26 -- # uname 00:04:18.347 17:08:27 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:18.347 17:08:27 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:18.347 17:08:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:18.347 17:08:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:18.347 17:08:27 -- common/autotest_common.sh@10 -- # set +x 00:04:18.347 ************************************ 00:04:18.347 START TEST env_mem_callbacks 00:04:18.347 ************************************ 00:04:18.347 17:08:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:18.347 EAL: Detected CPU lcores: 96 00:04:18.347 EAL: Detected NUMA nodes: 2 00:04:18.347 EAL: Detected shared linkage of DPDK 00:04:18.347 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:18.347 EAL: Selected IOVA mode 'VA' 00:04:18.347 EAL: No free 2048 kB hugepages reported on node 1 00:04:18.347 EAL: VFIO support initialized 00:04:18.347 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:18.347 00:04:18.347 00:04:18.347 CUnit - A unit testing framework for C - Version 2.1-3 00:04:18.347 http://cunit.sourceforge.net/ 00:04:18.347 00:04:18.347 00:04:18.347 Suite: memory 00:04:18.347 Test: test ... 
00:04:18.347 register 0x200000200000 2097152 00:04:18.347 malloc 3145728 00:04:18.347 register 0x200000400000 4194304 00:04:18.347 buf 0x200000500000 len 3145728 PASSED 00:04:18.347 malloc 64 00:04:18.347 buf 0x2000004fff40 len 64 PASSED 00:04:18.347 malloc 4194304 00:04:18.347 register 0x200000800000 6291456 00:04:18.347 buf 0x200000a00000 len 4194304 PASSED 00:04:18.347 free 0x200000500000 3145728 00:04:18.347 free 0x2000004fff40 64 00:04:18.347 unregister 0x200000400000 4194304 PASSED 00:04:18.347 free 0x200000a00000 4194304 00:04:18.347 unregister 0x200000800000 6291456 PASSED 00:04:18.347 malloc 8388608 00:04:18.347 register 0x200000400000 10485760 00:04:18.347 buf 0x200000600000 len 8388608 PASSED 00:04:18.347 free 0x200000600000 8388608 00:04:18.347 unregister 0x200000400000 10485760 PASSED 00:04:18.347 passed 00:04:18.347 00:04:18.347 Run Summary: Type Total Ran Passed Failed Inactive 00:04:18.347 suites 1 1 n/a 0 0 00:04:18.347 tests 1 1 1 0 0 00:04:18.347 asserts 15 15 15 0 n/a 00:04:18.347 00:04:18.347 Elapsed time = 0.005 seconds 00:04:18.347 00:04:18.347 real 0m0.043s 00:04:18.347 user 0m0.017s 00:04:18.347 sys 0m0.026s 00:04:18.347 17:08:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:18.347 17:08:27 -- common/autotest_common.sh@10 -- # set +x 00:04:18.347 ************************************ 00:04:18.347 END TEST env_mem_callbacks 00:04:18.347 ************************************ 00:04:18.606 00:04:18.606 real 0m7.111s 00:04:18.606 user 0m4.975s 00:04:18.606 sys 0m1.147s 00:04:18.606 17:08:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:18.606 17:08:27 -- common/autotest_common.sh@10 -- # set +x 00:04:18.606 ************************************ 00:04:18.606 END TEST env 00:04:18.606 ************************************ 00:04:18.606 17:08:27 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:04:18.606 17:08:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:18.606 17:08:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:18.606 17:08:27 -- common/autotest_common.sh@10 -- # set +x 00:04:18.606 ************************************ 00:04:18.606 START TEST rpc 00:04:18.606 ************************************ 00:04:18.606 17:08:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:04:18.606 * Looking for test storage... 00:04:18.606 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:18.606 17:08:27 -- rpc/rpc.sh@65 -- # spdk_pid=2920764 00:04:18.606 17:08:27 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:18.606 17:08:27 -- rpc/rpc.sh@67 -- # waitforlisten 2920764 00:04:18.606 17:08:27 -- common/autotest_common.sh@817 -- # '[' -z 2920764 ']' 00:04:18.606 17:08:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.606 17:08:27 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:18.606 17:08:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:18.606 17:08:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:18.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
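For reference, the rpc suite above launches a fresh spdk_tgt with the bdev tracepoint group enabled (-e bdev) and blocks until the target is listening on /var/tmp/spdk.sock before issuing any RPCs. A rough standalone equivalent of that launch-and-wait step, assuming the same workspace layout and substituting a simple socket poll for the harness's waitforlisten helper (which additionally verifies that the RPC service answers), could look like:

    # Start the SPDK target with the bdev tracepoint group enabled.
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK/build/bin/spdk_tgt" -e bdev &
    tgt_pid=$!
    # Poll for the default RPC socket before sending any rpc.py commands.
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done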
00:04:18.606 17:08:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:18.606 17:08:27 -- common/autotest_common.sh@10 -- # set +x 00:04:18.865 [2024-04-24 17:08:27.881547] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:04:18.865 [2024-04-24 17:08:27.881603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2920764 ] 00:04:18.865 EAL: No free 2048 kB hugepages reported on node 1 00:04:18.865 [2024-04-24 17:08:27.936667] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.865 [2024-04-24 17:08:28.014550] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:18.865 [2024-04-24 17:08:28.014588] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2920764' to capture a snapshot of events at runtime. 00:04:18.865 [2024-04-24 17:08:28.014595] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:18.865 [2024-04-24 17:08:28.014600] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:18.865 [2024-04-24 17:08:28.014605] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2920764 for offline analysis/debug. 00:04:18.865 [2024-04-24 17:08:28.014637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.430 17:08:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:19.430 17:08:28 -- common/autotest_common.sh@850 -- # return 0 00:04:19.430 17:08:28 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:19.430 17:08:28 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:19.430 17:08:28 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:19.430 17:08:28 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:19.430 17:08:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:19.430 17:08:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:19.430 17:08:28 -- common/autotest_common.sh@10 -- # set +x 00:04:19.688 ************************************ 00:04:19.688 START TEST rpc_integrity 00:04:19.688 ************************************ 00:04:19.688 17:08:28 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:19.688 17:08:28 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:19.688 17:08:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:19.688 17:08:28 -- common/autotest_common.sh@10 -- # set +x 00:04:19.688 17:08:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:19.688 17:08:28 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:19.688 17:08:28 -- rpc/rpc.sh@13 -- # jq length 00:04:19.688 17:08:28 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:19.688 17:08:28 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:19.688 17:08:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:19.688 17:08:28 -- 
common/autotest_common.sh@10 -- # set +x 00:04:19.688 17:08:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:19.688 17:08:28 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:19.688 17:08:28 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:19.688 17:08:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:19.688 17:08:28 -- common/autotest_common.sh@10 -- # set +x 00:04:19.688 17:08:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:19.688 17:08:28 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:19.688 { 00:04:19.688 "name": "Malloc0", 00:04:19.688 "aliases": [ 00:04:19.688 "1e62b36c-db35-431c-b916-ad64b746e7e0" 00:04:19.688 ], 00:04:19.688 "product_name": "Malloc disk", 00:04:19.688 "block_size": 512, 00:04:19.688 "num_blocks": 16384, 00:04:19.688 "uuid": "1e62b36c-db35-431c-b916-ad64b746e7e0", 00:04:19.688 "assigned_rate_limits": { 00:04:19.688 "rw_ios_per_sec": 0, 00:04:19.688 "rw_mbytes_per_sec": 0, 00:04:19.688 "r_mbytes_per_sec": 0, 00:04:19.688 "w_mbytes_per_sec": 0 00:04:19.688 }, 00:04:19.688 "claimed": false, 00:04:19.688 "zoned": false, 00:04:19.688 "supported_io_types": { 00:04:19.688 "read": true, 00:04:19.688 "write": true, 00:04:19.688 "unmap": true, 00:04:19.688 "write_zeroes": true, 00:04:19.688 "flush": true, 00:04:19.688 "reset": true, 00:04:19.688 "compare": false, 00:04:19.688 "compare_and_write": false, 00:04:19.688 "abort": true, 00:04:19.688 "nvme_admin": false, 00:04:19.688 "nvme_io": false 00:04:19.688 }, 00:04:19.688 "memory_domains": [ 00:04:19.688 { 00:04:19.688 "dma_device_id": "system", 00:04:19.688 "dma_device_type": 1 00:04:19.688 }, 00:04:19.688 { 00:04:19.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.688 "dma_device_type": 2 00:04:19.688 } 00:04:19.688 ], 00:04:19.688 "driver_specific": {} 00:04:19.688 } 00:04:19.688 ]' 00:04:19.688 17:08:28 -- rpc/rpc.sh@17 -- # jq length 00:04:19.688 17:08:28 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:19.688 17:08:28 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:19.688 17:08:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:19.688 17:08:28 -- common/autotest_common.sh@10 -- # set +x 00:04:19.688 [2024-04-24 17:08:28.913416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:19.688 [2024-04-24 17:08:28.913445] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:19.688 [2024-04-24 17:08:28.913456] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1908e00 00:04:19.688 [2024-04-24 17:08:28.913462] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:19.688 [2024-04-24 17:08:28.914518] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:19.688 [2024-04-24 17:08:28.914537] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:19.688 Passthru0 00:04:19.688 17:08:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:19.688 17:08:28 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:19.688 17:08:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:19.688 17:08:28 -- common/autotest_common.sh@10 -- # set +x 00:04:19.947 17:08:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:19.947 17:08:28 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:19.947 { 00:04:19.947 "name": "Malloc0", 00:04:19.947 "aliases": [ 00:04:19.947 "1e62b36c-db35-431c-b916-ad64b746e7e0" 00:04:19.947 ], 00:04:19.947 "product_name": "Malloc disk", 00:04:19.947 "block_size": 512, 00:04:19.947 
"num_blocks": 16384, 00:04:19.947 "uuid": "1e62b36c-db35-431c-b916-ad64b746e7e0", 00:04:19.947 "assigned_rate_limits": { 00:04:19.947 "rw_ios_per_sec": 0, 00:04:19.947 "rw_mbytes_per_sec": 0, 00:04:19.947 "r_mbytes_per_sec": 0, 00:04:19.947 "w_mbytes_per_sec": 0 00:04:19.947 }, 00:04:19.947 "claimed": true, 00:04:19.947 "claim_type": "exclusive_write", 00:04:19.947 "zoned": false, 00:04:19.947 "supported_io_types": { 00:04:19.947 "read": true, 00:04:19.947 "write": true, 00:04:19.947 "unmap": true, 00:04:19.947 "write_zeroes": true, 00:04:19.947 "flush": true, 00:04:19.947 "reset": true, 00:04:19.947 "compare": false, 00:04:19.947 "compare_and_write": false, 00:04:19.947 "abort": true, 00:04:19.947 "nvme_admin": false, 00:04:19.947 "nvme_io": false 00:04:19.947 }, 00:04:19.947 "memory_domains": [ 00:04:19.947 { 00:04:19.947 "dma_device_id": "system", 00:04:19.947 "dma_device_type": 1 00:04:19.947 }, 00:04:19.947 { 00:04:19.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.947 "dma_device_type": 2 00:04:19.947 } 00:04:19.947 ], 00:04:19.947 "driver_specific": {} 00:04:19.947 }, 00:04:19.947 { 00:04:19.947 "name": "Passthru0", 00:04:19.947 "aliases": [ 00:04:19.947 "cca0d56b-f2ce-5032-b753-04d1a258eb6a" 00:04:19.947 ], 00:04:19.947 "product_name": "passthru", 00:04:19.947 "block_size": 512, 00:04:19.947 "num_blocks": 16384, 00:04:19.947 "uuid": "cca0d56b-f2ce-5032-b753-04d1a258eb6a", 00:04:19.947 "assigned_rate_limits": { 00:04:19.947 "rw_ios_per_sec": 0, 00:04:19.947 "rw_mbytes_per_sec": 0, 00:04:19.947 "r_mbytes_per_sec": 0, 00:04:19.947 "w_mbytes_per_sec": 0 00:04:19.947 }, 00:04:19.947 "claimed": false, 00:04:19.947 "zoned": false, 00:04:19.947 "supported_io_types": { 00:04:19.947 "read": true, 00:04:19.947 "write": true, 00:04:19.947 "unmap": true, 00:04:19.947 "write_zeroes": true, 00:04:19.947 "flush": true, 00:04:19.947 "reset": true, 00:04:19.947 "compare": false, 00:04:19.947 "compare_and_write": false, 00:04:19.947 "abort": true, 00:04:19.947 "nvme_admin": false, 00:04:19.947 "nvme_io": false 00:04:19.947 }, 00:04:19.947 "memory_domains": [ 00:04:19.947 { 00:04:19.947 "dma_device_id": "system", 00:04:19.947 "dma_device_type": 1 00:04:19.947 }, 00:04:19.947 { 00:04:19.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.947 "dma_device_type": 2 00:04:19.947 } 00:04:19.947 ], 00:04:19.947 "driver_specific": { 00:04:19.947 "passthru": { 00:04:19.947 "name": "Passthru0", 00:04:19.947 "base_bdev_name": "Malloc0" 00:04:19.947 } 00:04:19.947 } 00:04:19.947 } 00:04:19.947 ]' 00:04:19.947 17:08:28 -- rpc/rpc.sh@21 -- # jq length 00:04:19.947 17:08:28 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:19.947 17:08:28 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:19.947 17:08:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:19.947 17:08:28 -- common/autotest_common.sh@10 -- # set +x 00:04:19.947 17:08:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:19.947 17:08:28 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:19.947 17:08:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:19.947 17:08:28 -- common/autotest_common.sh@10 -- # set +x 00:04:19.947 17:08:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:19.947 17:08:28 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:19.947 17:08:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:19.947 17:08:28 -- common/autotest_common.sh@10 -- # set +x 00:04:19.947 17:08:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:19.947 17:08:29 -- rpc/rpc.sh@25 -- 
# bdevs='[]' 00:04:19.947 17:08:29 -- rpc/rpc.sh@26 -- # jq length 00:04:19.947 17:08:29 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:19.947 00:04:19.947 real 0m0.259s 00:04:19.947 user 0m0.166s 00:04:19.947 sys 0m0.029s 00:04:19.947 17:08:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:19.947 17:08:29 -- common/autotest_common.sh@10 -- # set +x 00:04:19.947 ************************************ 00:04:19.947 END TEST rpc_integrity 00:04:19.947 ************************************ 00:04:19.947 17:08:29 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:19.947 17:08:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:19.947 17:08:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:19.947 17:08:29 -- common/autotest_common.sh@10 -- # set +x 00:04:20.205 ************************************ 00:04:20.205 START TEST rpc_plugins 00:04:20.205 ************************************ 00:04:20.205 17:08:29 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:04:20.205 17:08:29 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:20.205 17:08:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:20.205 17:08:29 -- common/autotest_common.sh@10 -- # set +x 00:04:20.205 17:08:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:20.205 17:08:29 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:20.205 17:08:29 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:20.205 17:08:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:20.205 17:08:29 -- common/autotest_common.sh@10 -- # set +x 00:04:20.205 17:08:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:20.205 17:08:29 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:20.205 { 00:04:20.205 "name": "Malloc1", 00:04:20.205 "aliases": [ 00:04:20.205 "73b293d9-a373-4d77-8c02-e8729addd62b" 00:04:20.205 ], 00:04:20.205 "product_name": "Malloc disk", 00:04:20.205 "block_size": 4096, 00:04:20.205 "num_blocks": 256, 00:04:20.205 "uuid": "73b293d9-a373-4d77-8c02-e8729addd62b", 00:04:20.205 "assigned_rate_limits": { 00:04:20.205 "rw_ios_per_sec": 0, 00:04:20.205 "rw_mbytes_per_sec": 0, 00:04:20.205 "r_mbytes_per_sec": 0, 00:04:20.205 "w_mbytes_per_sec": 0 00:04:20.205 }, 00:04:20.205 "claimed": false, 00:04:20.205 "zoned": false, 00:04:20.205 "supported_io_types": { 00:04:20.206 "read": true, 00:04:20.206 "write": true, 00:04:20.206 "unmap": true, 00:04:20.206 "write_zeroes": true, 00:04:20.206 "flush": true, 00:04:20.206 "reset": true, 00:04:20.206 "compare": false, 00:04:20.206 "compare_and_write": false, 00:04:20.206 "abort": true, 00:04:20.206 "nvme_admin": false, 00:04:20.206 "nvme_io": false 00:04:20.206 }, 00:04:20.206 "memory_domains": [ 00:04:20.206 { 00:04:20.206 "dma_device_id": "system", 00:04:20.206 "dma_device_type": 1 00:04:20.206 }, 00:04:20.206 { 00:04:20.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.206 "dma_device_type": 2 00:04:20.206 } 00:04:20.206 ], 00:04:20.206 "driver_specific": {} 00:04:20.206 } 00:04:20.206 ]' 00:04:20.206 17:08:29 -- rpc/rpc.sh@32 -- # jq length 00:04:20.206 17:08:29 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:20.206 17:08:29 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:20.206 17:08:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:20.206 17:08:29 -- common/autotest_common.sh@10 -- # set +x 00:04:20.206 17:08:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:20.206 17:08:29 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:20.206 17:08:29 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:04:20.206 17:08:29 -- common/autotest_common.sh@10 -- # set +x 00:04:20.206 17:08:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:20.206 17:08:29 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:20.206 17:08:29 -- rpc/rpc.sh@36 -- # jq length 00:04:20.206 17:08:29 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:20.206 00:04:20.206 real 0m0.132s 00:04:20.206 user 0m0.092s 00:04:20.206 sys 0m0.011s 00:04:20.206 17:08:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:20.206 17:08:29 -- common/autotest_common.sh@10 -- # set +x 00:04:20.206 ************************************ 00:04:20.206 END TEST rpc_plugins 00:04:20.206 ************************************ 00:04:20.206 17:08:29 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:20.206 17:08:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:20.206 17:08:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:20.206 17:08:29 -- common/autotest_common.sh@10 -- # set +x 00:04:20.464 ************************************ 00:04:20.464 START TEST rpc_trace_cmd_test 00:04:20.464 ************************************ 00:04:20.464 17:08:29 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:04:20.464 17:08:29 -- rpc/rpc.sh@40 -- # local info 00:04:20.464 17:08:29 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:20.464 17:08:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:20.464 17:08:29 -- common/autotest_common.sh@10 -- # set +x 00:04:20.464 17:08:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:20.464 17:08:29 -- rpc/rpc.sh@42 -- # info='{ 00:04:20.464 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2920764", 00:04:20.464 "tpoint_group_mask": "0x8", 00:04:20.464 "iscsi_conn": { 00:04:20.464 "mask": "0x2", 00:04:20.464 "tpoint_mask": "0x0" 00:04:20.464 }, 00:04:20.464 "scsi": { 00:04:20.464 "mask": "0x4", 00:04:20.464 "tpoint_mask": "0x0" 00:04:20.464 }, 00:04:20.464 "bdev": { 00:04:20.464 "mask": "0x8", 00:04:20.464 "tpoint_mask": "0xffffffffffffffff" 00:04:20.464 }, 00:04:20.464 "nvmf_rdma": { 00:04:20.464 "mask": "0x10", 00:04:20.464 "tpoint_mask": "0x0" 00:04:20.464 }, 00:04:20.464 "nvmf_tcp": { 00:04:20.464 "mask": "0x20", 00:04:20.464 "tpoint_mask": "0x0" 00:04:20.464 }, 00:04:20.464 "ftl": { 00:04:20.464 "mask": "0x40", 00:04:20.464 "tpoint_mask": "0x0" 00:04:20.464 }, 00:04:20.464 "blobfs": { 00:04:20.464 "mask": "0x80", 00:04:20.464 "tpoint_mask": "0x0" 00:04:20.464 }, 00:04:20.464 "dsa": { 00:04:20.464 "mask": "0x200", 00:04:20.464 "tpoint_mask": "0x0" 00:04:20.464 }, 00:04:20.464 "thread": { 00:04:20.464 "mask": "0x400", 00:04:20.464 "tpoint_mask": "0x0" 00:04:20.464 }, 00:04:20.464 "nvme_pcie": { 00:04:20.464 "mask": "0x800", 00:04:20.464 "tpoint_mask": "0x0" 00:04:20.464 }, 00:04:20.464 "iaa": { 00:04:20.464 "mask": "0x1000", 00:04:20.464 "tpoint_mask": "0x0" 00:04:20.464 }, 00:04:20.464 "nvme_tcp": { 00:04:20.464 "mask": "0x2000", 00:04:20.464 "tpoint_mask": "0x0" 00:04:20.464 }, 00:04:20.464 "bdev_nvme": { 00:04:20.464 "mask": "0x4000", 00:04:20.464 "tpoint_mask": "0x0" 00:04:20.464 }, 00:04:20.464 "sock": { 00:04:20.464 "mask": "0x8000", 00:04:20.464 "tpoint_mask": "0x0" 00:04:20.464 } 00:04:20.464 }' 00:04:20.464 17:08:29 -- rpc/rpc.sh@43 -- # jq length 00:04:20.464 17:08:29 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:20.465 17:08:29 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:20.465 17:08:29 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:20.465 17:08:29 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 
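The rpc_integrity and rpc_plugins runs above reduce to a short RPC workflow: create a malloc bdev, stack a passthru bdev on top of it, confirm bdev_get_bdevs reports both, then tear everything down and confirm the list is empty again. A condensed sketch of the same sequence driven directly through scripts/rpc.py (method names and sizes are taken from the log output above; the jq checks mirror the ones the test performs):

    RPC="$SPDK/scripts/rpc.py"            # same checkout as in the launch sketch above
    "$RPC" bdev_malloc_create 8 512       # 8 MiB malloc bdev, 512-byte blocks -> 16384 blocks
    "$RPC" bdev_passthru_create -b Malloc0 -p Passthru0
    "$RPC" bdev_get_bdevs | jq length     # expect 2: Malloc0 plus Passthru0
    "$RPC" bdev_passthru_delete Passthru0
    "$RPC" bdev_malloc_delete Malloc0
    "$RPC" bdev_get_bdevs | jq length     # expect 0 again

bdev_malloc_create prints the name it assigned (Malloc0 in the run above); the sketch simply reuses that name rather than capturing it the way the test script does.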
00:04:20.465 17:08:29 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:20.465 17:08:29 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:20.465 17:08:29 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:20.465 17:08:29 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:20.746 17:08:29 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:20.746 00:04:20.746 real 0m0.206s 00:04:20.746 user 0m0.175s 00:04:20.746 sys 0m0.024s 00:04:20.746 17:08:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:20.746 17:08:29 -- common/autotest_common.sh@10 -- # set +x 00:04:20.746 ************************************ 00:04:20.746 END TEST rpc_trace_cmd_test 00:04:20.746 ************************************ 00:04:20.746 17:08:29 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:20.746 17:08:29 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:20.746 17:08:29 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:20.746 17:08:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:20.746 17:08:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:20.746 17:08:29 -- common/autotest_common.sh@10 -- # set +x 00:04:20.746 ************************************ 00:04:20.746 START TEST rpc_daemon_integrity 00:04:20.746 ************************************ 00:04:20.746 17:08:29 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:20.746 17:08:29 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:20.746 17:08:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:20.746 17:08:29 -- common/autotest_common.sh@10 -- # set +x 00:04:20.746 17:08:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:20.746 17:08:29 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:20.746 17:08:29 -- rpc/rpc.sh@13 -- # jq length 00:04:20.746 17:08:29 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:20.746 17:08:29 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:20.747 17:08:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:20.747 17:08:29 -- common/autotest_common.sh@10 -- # set +x 00:04:20.747 17:08:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:20.747 17:08:29 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:20.747 17:08:29 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:20.747 17:08:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:20.747 17:08:29 -- common/autotest_common.sh@10 -- # set +x 00:04:20.747 17:08:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:20.747 17:08:29 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:20.747 { 00:04:20.747 "name": "Malloc2", 00:04:20.747 "aliases": [ 00:04:20.747 "e828b3ff-d722-4d91-9a3f-8afc4607d801" 00:04:20.747 ], 00:04:20.747 "product_name": "Malloc disk", 00:04:20.747 "block_size": 512, 00:04:20.747 "num_blocks": 16384, 00:04:20.747 "uuid": "e828b3ff-d722-4d91-9a3f-8afc4607d801", 00:04:20.747 "assigned_rate_limits": { 00:04:20.747 "rw_ios_per_sec": 0, 00:04:20.747 "rw_mbytes_per_sec": 0, 00:04:20.747 "r_mbytes_per_sec": 0, 00:04:20.747 "w_mbytes_per_sec": 0 00:04:20.747 }, 00:04:20.747 "claimed": false, 00:04:20.747 "zoned": false, 00:04:20.747 "supported_io_types": { 00:04:20.747 "read": true, 00:04:20.747 "write": true, 00:04:20.747 "unmap": true, 00:04:20.747 "write_zeroes": true, 00:04:20.747 "flush": true, 00:04:20.747 "reset": true, 00:04:20.747 "compare": false, 00:04:20.747 "compare_and_write": false, 00:04:20.747 "abort": true, 00:04:20.747 "nvme_admin": false, 00:04:20.747 "nvme_io": false 00:04:20.747 }, 00:04:20.747 "memory_domains": [ 00:04:20.747 { 00:04:20.747 "dma_device_id": "system", 00:04:20.747 
"dma_device_type": 1 00:04:20.747 }, 00:04:20.747 { 00:04:20.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.747 "dma_device_type": 2 00:04:20.747 } 00:04:20.747 ], 00:04:20.747 "driver_specific": {} 00:04:20.747 } 00:04:20.747 ]' 00:04:20.747 17:08:29 -- rpc/rpc.sh@17 -- # jq length 00:04:21.006 17:08:30 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:21.006 17:08:30 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:21.006 17:08:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:21.006 17:08:30 -- common/autotest_common.sh@10 -- # set +x 00:04:21.006 [2024-04-24 17:08:30.012480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:21.006 [2024-04-24 17:08:30.012509] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:21.006 [2024-04-24 17:08:30.012524] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1908ae0 00:04:21.006 [2024-04-24 17:08:30.012530] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:21.006 [2024-04-24 17:08:30.013519] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:21.006 [2024-04-24 17:08:30.013539] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:21.006 Passthru0 00:04:21.006 17:08:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:21.006 17:08:30 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:21.006 17:08:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:21.006 17:08:30 -- common/autotest_common.sh@10 -- # set +x 00:04:21.006 17:08:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:21.006 17:08:30 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:21.006 { 00:04:21.006 "name": "Malloc2", 00:04:21.006 "aliases": [ 00:04:21.006 "e828b3ff-d722-4d91-9a3f-8afc4607d801" 00:04:21.006 ], 00:04:21.006 "product_name": "Malloc disk", 00:04:21.006 "block_size": 512, 00:04:21.006 "num_blocks": 16384, 00:04:21.006 "uuid": "e828b3ff-d722-4d91-9a3f-8afc4607d801", 00:04:21.006 "assigned_rate_limits": { 00:04:21.006 "rw_ios_per_sec": 0, 00:04:21.006 "rw_mbytes_per_sec": 0, 00:04:21.006 "r_mbytes_per_sec": 0, 00:04:21.006 "w_mbytes_per_sec": 0 00:04:21.006 }, 00:04:21.006 "claimed": true, 00:04:21.006 "claim_type": "exclusive_write", 00:04:21.006 "zoned": false, 00:04:21.006 "supported_io_types": { 00:04:21.006 "read": true, 00:04:21.006 "write": true, 00:04:21.006 "unmap": true, 00:04:21.006 "write_zeroes": true, 00:04:21.006 "flush": true, 00:04:21.006 "reset": true, 00:04:21.006 "compare": false, 00:04:21.006 "compare_and_write": false, 00:04:21.006 "abort": true, 00:04:21.006 "nvme_admin": false, 00:04:21.006 "nvme_io": false 00:04:21.006 }, 00:04:21.006 "memory_domains": [ 00:04:21.006 { 00:04:21.006 "dma_device_id": "system", 00:04:21.006 "dma_device_type": 1 00:04:21.006 }, 00:04:21.006 { 00:04:21.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.006 "dma_device_type": 2 00:04:21.006 } 00:04:21.006 ], 00:04:21.006 "driver_specific": {} 00:04:21.006 }, 00:04:21.006 { 00:04:21.006 "name": "Passthru0", 00:04:21.006 "aliases": [ 00:04:21.006 "e5f70a29-29bc-52a4-b94e-037cb230e333" 00:04:21.006 ], 00:04:21.006 "product_name": "passthru", 00:04:21.006 "block_size": 512, 00:04:21.006 "num_blocks": 16384, 00:04:21.006 "uuid": "e5f70a29-29bc-52a4-b94e-037cb230e333", 00:04:21.006 "assigned_rate_limits": { 00:04:21.006 "rw_ios_per_sec": 0, 00:04:21.006 "rw_mbytes_per_sec": 0, 00:04:21.006 "r_mbytes_per_sec": 0, 00:04:21.006 
"w_mbytes_per_sec": 0 00:04:21.006 }, 00:04:21.006 "claimed": false, 00:04:21.006 "zoned": false, 00:04:21.006 "supported_io_types": { 00:04:21.006 "read": true, 00:04:21.006 "write": true, 00:04:21.006 "unmap": true, 00:04:21.006 "write_zeroes": true, 00:04:21.006 "flush": true, 00:04:21.006 "reset": true, 00:04:21.006 "compare": false, 00:04:21.006 "compare_and_write": false, 00:04:21.006 "abort": true, 00:04:21.006 "nvme_admin": false, 00:04:21.006 "nvme_io": false 00:04:21.006 }, 00:04:21.006 "memory_domains": [ 00:04:21.006 { 00:04:21.006 "dma_device_id": "system", 00:04:21.006 "dma_device_type": 1 00:04:21.006 }, 00:04:21.006 { 00:04:21.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.006 "dma_device_type": 2 00:04:21.006 } 00:04:21.006 ], 00:04:21.006 "driver_specific": { 00:04:21.006 "passthru": { 00:04:21.006 "name": "Passthru0", 00:04:21.006 "base_bdev_name": "Malloc2" 00:04:21.006 } 00:04:21.006 } 00:04:21.006 } 00:04:21.006 ]' 00:04:21.006 17:08:30 -- rpc/rpc.sh@21 -- # jq length 00:04:21.006 17:08:30 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:21.006 17:08:30 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:21.006 17:08:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:21.006 17:08:30 -- common/autotest_common.sh@10 -- # set +x 00:04:21.006 17:08:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:21.006 17:08:30 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:21.006 17:08:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:21.006 17:08:30 -- common/autotest_common.sh@10 -- # set +x 00:04:21.006 17:08:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:21.006 17:08:30 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:21.006 17:08:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:21.006 17:08:30 -- common/autotest_common.sh@10 -- # set +x 00:04:21.006 17:08:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:21.006 17:08:30 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:21.006 17:08:30 -- rpc/rpc.sh@26 -- # jq length 00:04:21.006 17:08:30 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:21.006 00:04:21.006 real 0m0.277s 00:04:21.006 user 0m0.179s 00:04:21.006 sys 0m0.028s 00:04:21.006 17:08:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:21.006 17:08:30 -- common/autotest_common.sh@10 -- # set +x 00:04:21.006 ************************************ 00:04:21.006 END TEST rpc_daemon_integrity 00:04:21.006 ************************************ 00:04:21.006 17:08:30 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:21.006 17:08:30 -- rpc/rpc.sh@84 -- # killprocess 2920764 00:04:21.006 17:08:30 -- common/autotest_common.sh@936 -- # '[' -z 2920764 ']' 00:04:21.006 17:08:30 -- common/autotest_common.sh@940 -- # kill -0 2920764 00:04:21.006 17:08:30 -- common/autotest_common.sh@941 -- # uname 00:04:21.006 17:08:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:21.006 17:08:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2920764 00:04:21.006 17:08:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:21.006 17:08:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:21.006 17:08:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2920764' 00:04:21.006 killing process with pid 2920764 00:04:21.006 17:08:30 -- common/autotest_common.sh@955 -- # kill 2920764 00:04:21.006 17:08:30 -- common/autotest_common.sh@960 -- # wait 2920764 00:04:21.572 00:04:21.572 real 0m2.816s 00:04:21.572 user 0m3.680s 
00:04:21.572 sys 0m0.805s 00:04:21.572 17:08:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:21.572 17:08:30 -- common/autotest_common.sh@10 -- # set +x 00:04:21.572 ************************************ 00:04:21.572 END TEST rpc 00:04:21.572 ************************************ 00:04:21.572 17:08:30 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:21.572 17:08:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:21.572 17:08:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:21.572 17:08:30 -- common/autotest_common.sh@10 -- # set +x 00:04:21.572 ************************************ 00:04:21.572 START TEST skip_rpc 00:04:21.572 ************************************ 00:04:21.572 17:08:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:21.572 * Looking for test storage... 00:04:21.572 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:21.572 17:08:30 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:21.572 17:08:30 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:21.572 17:08:30 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:21.572 17:08:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:21.572 17:08:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:21.572 17:08:30 -- common/autotest_common.sh@10 -- # set +x 00:04:21.830 ************************************ 00:04:21.830 START TEST skip_rpc 00:04:21.830 ************************************ 00:04:21.830 17:08:30 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:04:21.830 17:08:30 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2921513 00:04:21.830 17:08:30 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:21.830 17:08:30 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:21.830 17:08:30 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:21.830 [2024-04-24 17:08:30.968588] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:04:21.830 [2024-04-24 17:08:30.968623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2921513 ] 00:04:21.830 EAL: No free 2048 kB hugepages reported on node 1 00:04:21.830 [2024-04-24 17:08:31.021528] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.088 [2024-04-24 17:08:31.091493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.352 17:08:35 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:27.352 17:08:35 -- common/autotest_common.sh@638 -- # local es=0 00:04:27.352 17:08:35 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:27.352 17:08:35 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:04:27.352 17:08:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:27.352 17:08:35 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:04:27.352 17:08:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:27.352 17:08:35 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:04:27.352 17:08:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:27.352 17:08:35 -- common/autotest_common.sh@10 -- # set +x 00:04:27.352 17:08:35 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:27.352 17:08:35 -- common/autotest_common.sh@641 -- # es=1 00:04:27.352 17:08:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:27.352 17:08:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:27.352 17:08:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:27.352 17:08:35 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:27.352 17:08:35 -- rpc/skip_rpc.sh@23 -- # killprocess 2921513 00:04:27.352 17:08:35 -- common/autotest_common.sh@936 -- # '[' -z 2921513 ']' 00:04:27.352 17:08:35 -- common/autotest_common.sh@940 -- # kill -0 2921513 00:04:27.352 17:08:35 -- common/autotest_common.sh@941 -- # uname 00:04:27.352 17:08:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:27.352 17:08:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2921513 00:04:27.352 17:08:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:27.352 17:08:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:27.352 17:08:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2921513' 00:04:27.352 killing process with pid 2921513 00:04:27.352 17:08:35 -- common/autotest_common.sh@955 -- # kill 2921513 00:04:27.352 17:08:35 -- common/autotest_common.sh@960 -- # wait 2921513 00:04:27.352 00:04:27.352 real 0m5.388s 00:04:27.352 user 0m5.164s 00:04:27.352 sys 0m0.257s 00:04:27.352 17:08:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:27.352 17:08:36 -- common/autotest_common.sh@10 -- # set +x 00:04:27.352 ************************************ 00:04:27.352 END TEST skip_rpc 00:04:27.352 ************************************ 00:04:27.352 17:08:36 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:27.352 17:08:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:27.352 17:08:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:27.352 17:08:36 -- common/autotest_common.sh@10 -- # set +x 00:04:27.352 ************************************ 00:04:27.352 START TEST skip_rpc_with_json 00:04:27.352 ************************************ 
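The skip_rpc case that just finished above is a single negative check: with the target started via --no-rpc-server there is no /var/tmp/spdk.sock, so any rpc.py call has to fail, and the test inverts that failure into a pass. A minimal reproduction, assuming the same build path as in the earlier sketches:

    "$SPDK/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5                                   # mirror the test: nothing to poll, just give it time
    if "$SPDK/scripts/rpc.py" spdk_get_version; then
        echo "unexpected: RPC succeeded although no RPC server was started" >&2
    fi
    kill "$tgt_pid"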
00:04:27.352 17:08:36 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:04:27.352 17:08:36 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:27.352 17:08:36 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:27.352 17:08:36 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2922461 00:04:27.352 17:08:36 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.352 17:08:36 -- rpc/skip_rpc.sh@31 -- # waitforlisten 2922461 00:04:27.352 17:08:36 -- common/autotest_common.sh@817 -- # '[' -z 2922461 ']' 00:04:27.352 17:08:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.352 17:08:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:27.352 17:08:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.352 17:08:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:27.352 17:08:36 -- common/autotest_common.sh@10 -- # set +x 00:04:27.352 [2024-04-24 17:08:36.512975] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:04:27.352 [2024-04-24 17:08:36.513017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2922461 ] 00:04:27.352 EAL: No free 2048 kB hugepages reported on node 1 00:04:27.352 [2024-04-24 17:08:36.561752] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.611 [2024-04-24 17:08:36.640685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.178 17:08:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:28.178 17:08:37 -- common/autotest_common.sh@850 -- # return 0 00:04:28.178 17:08:37 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:28.178 17:08:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:28.178 17:08:37 -- common/autotest_common.sh@10 -- # set +x 00:04:28.178 [2024-04-24 17:08:37.321211] nvmf_rpc.c:2509:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:28.178 request: 00:04:28.178 { 00:04:28.178 "trtype": "tcp", 00:04:28.178 "method": "nvmf_get_transports", 00:04:28.178 "req_id": 1 00:04:28.178 } 00:04:28.178 Got JSON-RPC error response 00:04:28.178 response: 00:04:28.178 { 00:04:28.178 "code": -19, 00:04:28.178 "message": "No such device" 00:04:28.178 } 00:04:28.178 17:08:37 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:28.178 17:08:37 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:28.178 17:08:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:28.178 17:08:37 -- common/autotest_common.sh@10 -- # set +x 00:04:28.178 [2024-04-24 17:08:37.333306] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:28.178 17:08:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:28.178 17:08:37 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:28.178 17:08:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:28.178 17:08:37 -- common/autotest_common.sh@10 -- # set +x 00:04:28.436 17:08:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:28.436 17:08:37 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:28.436 { 
00:04:28.436 "subsystems": [ 00:04:28.436 { 00:04:28.436 "subsystem": "keyring", 00:04:28.436 "config": [] 00:04:28.436 }, 00:04:28.436 { 00:04:28.436 "subsystem": "iobuf", 00:04:28.436 "config": [ 00:04:28.436 { 00:04:28.436 "method": "iobuf_set_options", 00:04:28.436 "params": { 00:04:28.436 "small_pool_count": 8192, 00:04:28.436 "large_pool_count": 1024, 00:04:28.436 "small_bufsize": 8192, 00:04:28.436 "large_bufsize": 135168 00:04:28.436 } 00:04:28.436 } 00:04:28.436 ] 00:04:28.436 }, 00:04:28.436 { 00:04:28.436 "subsystem": "sock", 00:04:28.436 "config": [ 00:04:28.436 { 00:04:28.436 "method": "sock_impl_set_options", 00:04:28.436 "params": { 00:04:28.436 "impl_name": "posix", 00:04:28.436 "recv_buf_size": 2097152, 00:04:28.436 "send_buf_size": 2097152, 00:04:28.436 "enable_recv_pipe": true, 00:04:28.436 "enable_quickack": false, 00:04:28.436 "enable_placement_id": 0, 00:04:28.436 "enable_zerocopy_send_server": true, 00:04:28.436 "enable_zerocopy_send_client": false, 00:04:28.436 "zerocopy_threshold": 0, 00:04:28.436 "tls_version": 0, 00:04:28.436 "enable_ktls": false 00:04:28.436 } 00:04:28.436 }, 00:04:28.436 { 00:04:28.436 "method": "sock_impl_set_options", 00:04:28.436 "params": { 00:04:28.436 "impl_name": "ssl", 00:04:28.436 "recv_buf_size": 4096, 00:04:28.436 "send_buf_size": 4096, 00:04:28.436 "enable_recv_pipe": true, 00:04:28.436 "enable_quickack": false, 00:04:28.436 "enable_placement_id": 0, 00:04:28.436 "enable_zerocopy_send_server": true, 00:04:28.437 "enable_zerocopy_send_client": false, 00:04:28.437 "zerocopy_threshold": 0, 00:04:28.437 "tls_version": 0, 00:04:28.437 "enable_ktls": false 00:04:28.437 } 00:04:28.437 } 00:04:28.437 ] 00:04:28.437 }, 00:04:28.437 { 00:04:28.437 "subsystem": "vmd", 00:04:28.437 "config": [] 00:04:28.437 }, 00:04:28.437 { 00:04:28.437 "subsystem": "accel", 00:04:28.437 "config": [ 00:04:28.437 { 00:04:28.437 "method": "accel_set_options", 00:04:28.437 "params": { 00:04:28.437 "small_cache_size": 128, 00:04:28.437 "large_cache_size": 16, 00:04:28.437 "task_count": 2048, 00:04:28.437 "sequence_count": 2048, 00:04:28.437 "buf_count": 2048 00:04:28.437 } 00:04:28.437 } 00:04:28.437 ] 00:04:28.437 }, 00:04:28.437 { 00:04:28.437 "subsystem": "bdev", 00:04:28.437 "config": [ 00:04:28.437 { 00:04:28.437 "method": "bdev_set_options", 00:04:28.437 "params": { 00:04:28.437 "bdev_io_pool_size": 65535, 00:04:28.437 "bdev_io_cache_size": 256, 00:04:28.437 "bdev_auto_examine": true, 00:04:28.437 "iobuf_small_cache_size": 128, 00:04:28.437 "iobuf_large_cache_size": 16 00:04:28.437 } 00:04:28.437 }, 00:04:28.437 { 00:04:28.437 "method": "bdev_raid_set_options", 00:04:28.437 "params": { 00:04:28.437 "process_window_size_kb": 1024 00:04:28.437 } 00:04:28.437 }, 00:04:28.437 { 00:04:28.437 "method": "bdev_iscsi_set_options", 00:04:28.437 "params": { 00:04:28.437 "timeout_sec": 30 00:04:28.437 } 00:04:28.437 }, 00:04:28.437 { 00:04:28.437 "method": "bdev_nvme_set_options", 00:04:28.437 "params": { 00:04:28.437 "action_on_timeout": "none", 00:04:28.437 "timeout_us": 0, 00:04:28.437 "timeout_admin_us": 0, 00:04:28.437 "keep_alive_timeout_ms": 10000, 00:04:28.437 "arbitration_burst": 0, 00:04:28.437 "low_priority_weight": 0, 00:04:28.437 "medium_priority_weight": 0, 00:04:28.437 "high_priority_weight": 0, 00:04:28.437 "nvme_adminq_poll_period_us": 10000, 00:04:28.437 "nvme_ioq_poll_period_us": 0, 00:04:28.437 "io_queue_requests": 0, 00:04:28.437 "delay_cmd_submit": true, 00:04:28.437 "transport_retry_count": 4, 00:04:28.437 "bdev_retry_count": 3, 00:04:28.437 
"transport_ack_timeout": 0, 00:04:28.437 "ctrlr_loss_timeout_sec": 0, 00:04:28.437 "reconnect_delay_sec": 0, 00:04:28.437 "fast_io_fail_timeout_sec": 0, 00:04:28.437 "disable_auto_failback": false, 00:04:28.437 "generate_uuids": false, 00:04:28.437 "transport_tos": 0, 00:04:28.437 "nvme_error_stat": false, 00:04:28.437 "rdma_srq_size": 0, 00:04:28.437 "io_path_stat": false, 00:04:28.437 "allow_accel_sequence": false, 00:04:28.437 "rdma_max_cq_size": 0, 00:04:28.437 "rdma_cm_event_timeout_ms": 0, 00:04:28.437 "dhchap_digests": [ 00:04:28.437 "sha256", 00:04:28.437 "sha384", 00:04:28.437 "sha512" 00:04:28.437 ], 00:04:28.437 "dhchap_dhgroups": [ 00:04:28.437 "null", 00:04:28.437 "ffdhe2048", 00:04:28.437 "ffdhe3072", 00:04:28.437 "ffdhe4096", 00:04:28.437 "ffdhe6144", 00:04:28.437 "ffdhe8192" 00:04:28.437 ] 00:04:28.437 } 00:04:28.437 }, 00:04:28.437 { 00:04:28.437 "method": "bdev_nvme_set_hotplug", 00:04:28.437 "params": { 00:04:28.437 "period_us": 100000, 00:04:28.437 "enable": false 00:04:28.437 } 00:04:28.437 }, 00:04:28.437 { 00:04:28.437 "method": "bdev_wait_for_examine" 00:04:28.437 } 00:04:28.437 ] 00:04:28.437 }, 00:04:28.437 { 00:04:28.437 "subsystem": "scsi", 00:04:28.437 "config": null 00:04:28.437 }, 00:04:28.437 { 00:04:28.437 "subsystem": "scheduler", 00:04:28.437 "config": [ 00:04:28.437 { 00:04:28.437 "method": "framework_set_scheduler", 00:04:28.437 "params": { 00:04:28.437 "name": "static" 00:04:28.437 } 00:04:28.437 } 00:04:28.437 ] 00:04:28.437 }, 00:04:28.437 { 00:04:28.437 "subsystem": "vhost_scsi", 00:04:28.437 "config": [] 00:04:28.437 }, 00:04:28.437 { 00:04:28.437 "subsystem": "vhost_blk", 00:04:28.437 "config": [] 00:04:28.437 }, 00:04:28.437 { 00:04:28.437 "subsystem": "ublk", 00:04:28.437 "config": [] 00:04:28.437 }, 00:04:28.437 { 00:04:28.437 "subsystem": "nbd", 00:04:28.437 "config": [] 00:04:28.437 }, 00:04:28.437 { 00:04:28.437 "subsystem": "nvmf", 00:04:28.437 "config": [ 00:04:28.437 { 00:04:28.437 "method": "nvmf_set_config", 00:04:28.437 "params": { 00:04:28.437 "discovery_filter": "match_any", 00:04:28.437 "admin_cmd_passthru": { 00:04:28.437 "identify_ctrlr": false 00:04:28.437 } 00:04:28.437 } 00:04:28.437 }, 00:04:28.437 { 00:04:28.437 "method": "nvmf_set_max_subsystems", 00:04:28.437 "params": { 00:04:28.437 "max_subsystems": 1024 00:04:28.437 } 00:04:28.437 }, 00:04:28.437 { 00:04:28.437 "method": "nvmf_set_crdt", 00:04:28.437 "params": { 00:04:28.437 "crdt1": 0, 00:04:28.437 "crdt2": 0, 00:04:28.437 "crdt3": 0 00:04:28.437 } 00:04:28.437 }, 00:04:28.437 { 00:04:28.437 "method": "nvmf_create_transport", 00:04:28.437 "params": { 00:04:28.437 "trtype": "TCP", 00:04:28.437 "max_queue_depth": 128, 00:04:28.437 "max_io_qpairs_per_ctrlr": 127, 00:04:28.437 "in_capsule_data_size": 4096, 00:04:28.437 "max_io_size": 131072, 00:04:28.438 "io_unit_size": 131072, 00:04:28.438 "max_aq_depth": 128, 00:04:28.438 "num_shared_buffers": 511, 00:04:28.438 "buf_cache_size": 4294967295, 00:04:28.438 "dif_insert_or_strip": false, 00:04:28.438 "zcopy": false, 00:04:28.438 "c2h_success": true, 00:04:28.438 "sock_priority": 0, 00:04:28.438 "abort_timeout_sec": 1, 00:04:28.438 "ack_timeout": 0 00:04:28.438 } 00:04:28.438 } 00:04:28.438 ] 00:04:28.438 }, 00:04:28.438 { 00:04:28.438 "subsystem": "iscsi", 00:04:28.438 "config": [ 00:04:28.438 { 00:04:28.438 "method": "iscsi_set_options", 00:04:28.438 "params": { 00:04:28.438 "node_base": "iqn.2016-06.io.spdk", 00:04:28.438 "max_sessions": 128, 00:04:28.438 "max_connections_per_session": 2, 00:04:28.438 "max_queue_depth": 64, 
00:04:28.438 "default_time2wait": 2, 00:04:28.438 "default_time2retain": 20, 00:04:28.438 "first_burst_length": 8192, 00:04:28.438 "immediate_data": true, 00:04:28.438 "allow_duplicated_isid": false, 00:04:28.438 "error_recovery_level": 0, 00:04:28.438 "nop_timeout": 60, 00:04:28.438 "nop_in_interval": 30, 00:04:28.438 "disable_chap": false, 00:04:28.438 "require_chap": false, 00:04:28.438 "mutual_chap": false, 00:04:28.438 "chap_group": 0, 00:04:28.438 "max_large_datain_per_connection": 64, 00:04:28.438 "max_r2t_per_connection": 4, 00:04:28.438 "pdu_pool_size": 36864, 00:04:28.438 "immediate_data_pool_size": 16384, 00:04:28.438 "data_out_pool_size": 2048 00:04:28.438 } 00:04:28.438 } 00:04:28.438 ] 00:04:28.438 } 00:04:28.438 ] 00:04:28.438 } 00:04:28.438 17:08:37 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:28.438 17:08:37 -- rpc/skip_rpc.sh@40 -- # killprocess 2922461 00:04:28.438 17:08:37 -- common/autotest_common.sh@936 -- # '[' -z 2922461 ']' 00:04:28.438 17:08:37 -- common/autotest_common.sh@940 -- # kill -0 2922461 00:04:28.438 17:08:37 -- common/autotest_common.sh@941 -- # uname 00:04:28.438 17:08:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:28.438 17:08:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2922461 00:04:28.438 17:08:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:28.438 17:08:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:28.438 17:08:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2922461' 00:04:28.438 killing process with pid 2922461 00:04:28.438 17:08:37 -- common/autotest_common.sh@955 -- # kill 2922461 00:04:28.438 17:08:37 -- common/autotest_common.sh@960 -- # wait 2922461 00:04:28.696 17:08:37 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2922707 00:04:28.696 17:08:37 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:28.696 17:08:37 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:33.961 17:08:42 -- rpc/skip_rpc.sh@50 -- # killprocess 2922707 00:04:33.961 17:08:42 -- common/autotest_common.sh@936 -- # '[' -z 2922707 ']' 00:04:33.961 17:08:42 -- common/autotest_common.sh@940 -- # kill -0 2922707 00:04:33.961 17:08:42 -- common/autotest_common.sh@941 -- # uname 00:04:33.961 17:08:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:33.961 17:08:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2922707 00:04:33.961 17:08:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:33.961 17:08:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:33.961 17:08:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2922707' 00:04:33.961 killing process with pid 2922707 00:04:33.961 17:08:42 -- common/autotest_common.sh@955 -- # kill 2922707 00:04:33.961 17:08:42 -- common/autotest_common.sh@960 -- # wait 2922707 00:04:34.220 17:08:43 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:34.220 17:08:43 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:34.220 00:04:34.220 real 0m6.783s 00:04:34.220 user 0m6.630s 00:04:34.220 sys 0m0.567s 00:04:34.220 17:08:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:34.220 17:08:43 -- common/autotest_common.sh@10 -- # set +x 00:04:34.220 
************************************ 00:04:34.220 END TEST skip_rpc_with_json 00:04:34.220 ************************************ 00:04:34.220 17:08:43 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:34.220 17:08:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:34.220 17:08:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:34.220 17:08:43 -- common/autotest_common.sh@10 -- # set +x 00:04:34.220 ************************************ 00:04:34.220 START TEST skip_rpc_with_delay 00:04:34.220 ************************************ 00:04:34.220 17:08:43 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:04:34.220 17:08:43 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:34.220 17:08:43 -- common/autotest_common.sh@638 -- # local es=0 00:04:34.220 17:08:43 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:34.220 17:08:43 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:34.220 17:08:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:34.220 17:08:43 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:34.220 17:08:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:34.220 17:08:43 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:34.220 17:08:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:34.220 17:08:43 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:34.220 17:08:43 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:34.220 17:08:43 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:34.479 [2024-04-24 17:08:43.474383] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:34.479 [2024-04-24 17:08:43.474459] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:34.479 17:08:43 -- common/autotest_common.sh@641 -- # es=1 00:04:34.479 17:08:43 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:34.479 17:08:43 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:34.479 17:08:43 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:34.479 00:04:34.479 real 0m0.063s 00:04:34.479 user 0m0.037s 00:04:34.479 sys 0m0.026s 00:04:34.479 17:08:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:34.479 17:08:43 -- common/autotest_common.sh@10 -- # set +x 00:04:34.479 ************************************ 00:04:34.479 END TEST skip_rpc_with_delay 00:04:34.479 ************************************ 00:04:34.479 17:08:43 -- rpc/skip_rpc.sh@77 -- # uname 00:04:34.479 17:08:43 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:34.479 17:08:43 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:34.479 17:08:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:34.479 17:08:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:34.479 17:08:43 -- common/autotest_common.sh@10 -- # set +x 00:04:34.479 ************************************ 00:04:34.479 START TEST exit_on_failed_rpc_init 00:04:34.479 ************************************ 00:04:34.479 17:08:43 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:04:34.479 17:08:43 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2923696 00:04:34.479 17:08:43 -- rpc/skip_rpc.sh@63 -- # waitforlisten 2923696 00:04:34.479 17:08:43 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:34.479 17:08:43 -- common/autotest_common.sh@817 -- # '[' -z 2923696 ']' 00:04:34.479 17:08:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.479 17:08:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:34.479 17:08:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.479 17:08:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:34.479 17:08:43 -- common/autotest_common.sh@10 -- # set +x 00:04:34.479 [2024-04-24 17:08:43.690239] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
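The skip_rpc_with_delay failure above is the expected one: --wait-for-rpc asks the app to pause until an RPC tells it to continue, which cannot work together with --no-rpc-server, so spdk_tgt refuses to start. A minimal check of that flag conflict, reusing the same binary path as in the earlier sketches:

    if "$SPDK/build/bin/spdk_tgt" --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected: the two flags were accepted together" >&2
    fi
    # Expected on stderr:
    # "Cannot use '--wait-for-rpc' if no RPC server is going to be started."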
00:04:34.479 [2024-04-24 17:08:43.690282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2923696 ] 00:04:34.479 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.737 [2024-04-24 17:08:43.744884] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.737 [2024-04-24 17:08:43.822858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.304 17:08:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:35.304 17:08:44 -- common/autotest_common.sh@850 -- # return 0 00:04:35.304 17:08:44 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:35.304 17:08:44 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:35.304 17:08:44 -- common/autotest_common.sh@638 -- # local es=0 00:04:35.304 17:08:44 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:35.304 17:08:44 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:35.304 17:08:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:35.304 17:08:44 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:35.304 17:08:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:35.304 17:08:44 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:35.304 17:08:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:35.304 17:08:44 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:35.304 17:08:44 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:35.304 17:08:44 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:35.304 [2024-04-24 17:08:44.535202] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:04:35.304 [2024-04-24 17:08:44.535248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2923920 ] 00:04:35.563 EAL: No free 2048 kB hugepages reported on node 1 00:04:35.563 [2024-04-24 17:08:44.588789] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.563 [2024-04-24 17:08:44.657500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:35.563 [2024-04-24 17:08:44.657563] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:35.563 [2024-04-24 17:08:44.657572] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:35.563 [2024-04-24 17:08:44.657578] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:35.563 17:08:44 -- common/autotest_common.sh@641 -- # es=234 00:04:35.563 17:08:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:35.563 17:08:44 -- common/autotest_common.sh@650 -- # es=106 00:04:35.563 17:08:44 -- common/autotest_common.sh@651 -- # case "$es" in 00:04:35.563 17:08:44 -- common/autotest_common.sh@658 -- # es=1 00:04:35.563 17:08:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:35.563 17:08:44 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:35.563 17:08:44 -- rpc/skip_rpc.sh@70 -- # killprocess 2923696 00:04:35.563 17:08:44 -- common/autotest_common.sh@936 -- # '[' -z 2923696 ']' 00:04:35.563 17:08:44 -- common/autotest_common.sh@940 -- # kill -0 2923696 00:04:35.563 17:08:44 -- common/autotest_common.sh@941 -- # uname 00:04:35.563 17:08:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:35.563 17:08:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2923696 00:04:35.563 17:08:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:35.563 17:08:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:35.563 17:08:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2923696' 00:04:35.563 killing process with pid 2923696 00:04:35.563 17:08:44 -- common/autotest_common.sh@955 -- # kill 2923696 00:04:35.563 17:08:44 -- common/autotest_common.sh@960 -- # wait 2923696 00:04:36.130 00:04:36.130 real 0m1.472s 00:04:36.130 user 0m1.714s 00:04:36.130 sys 0m0.372s 00:04:36.130 17:08:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:36.130 17:08:45 -- common/autotest_common.sh@10 -- # set +x 00:04:36.130 ************************************ 00:04:36.130 END TEST exit_on_failed_rpc_init 00:04:36.130 ************************************ 00:04:36.130 17:08:45 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:36.130 00:04:36.130 real 0m14.437s 00:04:36.130 user 0m13.822s 00:04:36.130 sys 0m1.635s 00:04:36.130 17:08:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:36.130 17:08:45 -- common/autotest_common.sh@10 -- # set +x 00:04:36.130 ************************************ 00:04:36.130 END TEST skip_rpc 00:04:36.130 ************************************ 00:04:36.130 17:08:45 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:36.130 17:08:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:36.130 17:08:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:36.130 17:08:45 -- common/autotest_common.sh@10 -- # set +x 00:04:36.130 ************************************ 00:04:36.130 START TEST rpc_client 00:04:36.130 ************************************ 00:04:36.130 17:08:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:36.388 * Looking for test storage... 
00:04:36.388 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:04:36.389 17:08:45 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:36.389 OK 00:04:36.389 17:08:45 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:36.389 00:04:36.389 real 0m0.120s 00:04:36.389 user 0m0.046s 00:04:36.389 sys 0m0.082s 00:04:36.389 17:08:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:36.389 17:08:45 -- common/autotest_common.sh@10 -- # set +x 00:04:36.389 ************************************ 00:04:36.389 END TEST rpc_client 00:04:36.389 ************************************ 00:04:36.389 17:08:45 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:04:36.389 17:08:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:36.389 17:08:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:36.389 17:08:45 -- common/autotest_common.sh@10 -- # set +x 00:04:36.389 ************************************ 00:04:36.389 START TEST json_config 00:04:36.389 ************************************ 00:04:36.389 17:08:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:04:36.648 17:08:45 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:36.648 17:08:45 -- nvmf/common.sh@7 -- # uname -s 00:04:36.648 17:08:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:36.648 17:08:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:36.648 17:08:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:36.648 17:08:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:36.648 17:08:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:36.648 17:08:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:36.648 17:08:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:36.648 17:08:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:36.648 17:08:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:36.648 17:08:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:36.648 17:08:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:04:36.648 17:08:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:04:36.648 17:08:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:36.648 17:08:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:36.648 17:08:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:36.648 17:08:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:36.648 17:08:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:36.648 17:08:45 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:36.648 17:08:45 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:36.648 17:08:45 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:36.648 17:08:45 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.648 17:08:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.648 17:08:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.648 17:08:45 -- paths/export.sh@5 -- # export PATH 00:04:36.648 17:08:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.648 17:08:45 -- nvmf/common.sh@47 -- # : 0 00:04:36.648 17:08:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:36.648 17:08:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:36.648 17:08:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:36.648 17:08:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:36.648 17:08:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:36.648 17:08:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:36.648 17:08:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:36.648 17:08:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:36.648 17:08:45 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:04:36.648 17:08:45 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:36.648 17:08:45 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:36.648 17:08:45 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:36.648 17:08:45 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:36.648 17:08:45 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:36.648 17:08:45 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:36.648 17:08:45 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:36.648 17:08:45 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:36.648 17:08:45 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:36.648 17:08:45 -- 
json_config/json_config.sh@33 -- # declare -A app_params 00:04:36.648 17:08:45 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:04:36.648 17:08:45 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:36.648 17:08:45 -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:36.648 17:08:45 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:36.648 17:08:45 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:36.648 INFO: JSON configuration test init 00:04:36.648 17:08:45 -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:36.648 17:08:45 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:36.648 17:08:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:36.648 17:08:45 -- common/autotest_common.sh@10 -- # set +x 00:04:36.648 17:08:45 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:36.648 17:08:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:36.648 17:08:45 -- common/autotest_common.sh@10 -- # set +x 00:04:36.648 17:08:45 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:36.648 17:08:45 -- json_config/common.sh@9 -- # local app=target 00:04:36.648 17:08:45 -- json_config/common.sh@10 -- # shift 00:04:36.648 17:08:45 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:36.648 17:08:45 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:36.648 17:08:45 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:36.648 17:08:45 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.648 17:08:45 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.648 17:08:45 -- json_config/common.sh@22 -- # app_pid["$app"]=2924278 00:04:36.648 17:08:45 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:36.648 Waiting for target to run... 00:04:36.648 17:08:45 -- json_config/common.sh@25 -- # waitforlisten 2924278 /var/tmp/spdk_tgt.sock 00:04:36.648 17:08:45 -- common/autotest_common.sh@817 -- # '[' -z 2924278 ']' 00:04:36.648 17:08:45 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:36.648 17:08:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:36.648 17:08:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:36.648 17:08:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:36.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:36.648 17:08:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:36.648 17:08:45 -- common/autotest_common.sh@10 -- # set +x 00:04:36.648 [2024-04-24 17:08:45.755129] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:04:36.648 [2024-04-24 17:08:45.755174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2924278 ] 00:04:36.648 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.214 [2024-04-24 17:08:46.189756] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.214 [2024-04-24 17:08:46.273900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.472 17:08:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:37.472 17:08:46 -- common/autotest_common.sh@850 -- # return 0 00:04:37.472 17:08:46 -- json_config/common.sh@26 -- # echo '' 00:04:37.472 00:04:37.472 17:08:46 -- json_config/json_config.sh@269 -- # create_accel_config 00:04:37.472 17:08:46 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:37.472 17:08:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:37.472 17:08:46 -- common/autotest_common.sh@10 -- # set +x 00:04:37.472 17:08:46 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:37.472 17:08:46 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:37.472 17:08:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:37.472 17:08:46 -- common/autotest_common.sh@10 -- # set +x 00:04:37.472 17:08:46 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:37.472 17:08:46 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:37.472 17:08:46 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:40.753 17:08:49 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:40.753 17:08:49 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:40.753 17:08:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:40.753 17:08:49 -- common/autotest_common.sh@10 -- # set +x 00:04:40.753 17:08:49 -- json_config/json_config.sh@45 -- # local ret=0 00:04:40.753 17:08:49 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:40.753 17:08:49 -- json_config/json_config.sh@46 -- # local enabled_types 00:04:40.753 17:08:49 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:40.753 17:08:49 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:40.753 17:08:49 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:40.753 17:08:49 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:40.753 17:08:49 -- json_config/json_config.sh@48 -- # local get_types 00:04:40.753 17:08:49 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:40.753 17:08:49 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:40.753 17:08:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:40.753 17:08:49 -- common/autotest_common.sh@10 -- # set +x 00:04:40.753 17:08:49 -- json_config/json_config.sh@55 -- # return 0 00:04:40.753 17:08:49 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:40.753 17:08:49 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:40.753 17:08:49 -- json_config/json_config.sh@286 -- # 
[[ 0 -eq 1 ]] 00:04:40.753 17:08:49 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:40.753 17:08:49 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:40.753 17:08:49 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:40.753 17:08:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:40.753 17:08:49 -- common/autotest_common.sh@10 -- # set +x 00:04:40.753 17:08:49 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:40.753 17:08:49 -- json_config/json_config.sh@233 -- # [[ rdma == \r\d\m\a ]] 00:04:40.753 17:08:49 -- json_config/json_config.sh@234 -- # TEST_TRANSPORT=rdma 00:04:40.753 17:08:49 -- json_config/json_config.sh@234 -- # nvmftestinit 00:04:40.753 17:08:49 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:04:40.753 17:08:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:40.753 17:08:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:04:40.753 17:08:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:04:40.753 17:08:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:04:40.753 17:08:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:40.753 17:08:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:04:40.753 17:08:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:40.753 17:08:49 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:04:40.753 17:08:49 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:04:40.753 17:08:49 -- nvmf/common.sh@285 -- # xtrace_disable 00:04:40.753 17:08:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.018 17:08:55 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:04:46.018 17:08:55 -- nvmf/common.sh@291 -- # pci_devs=() 00:04:46.018 17:08:55 -- nvmf/common.sh@291 -- # local -a pci_devs 00:04:46.018 17:08:55 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:04:46.018 17:08:55 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:04:46.018 17:08:55 -- nvmf/common.sh@293 -- # pci_drivers=() 00:04:46.018 17:08:55 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:04:46.018 17:08:55 -- nvmf/common.sh@295 -- # net_devs=() 00:04:46.018 17:08:55 -- nvmf/common.sh@295 -- # local -ga net_devs 00:04:46.018 17:08:55 -- nvmf/common.sh@296 -- # e810=() 00:04:46.018 17:08:55 -- nvmf/common.sh@296 -- # local -ga e810 00:04:46.018 17:08:55 -- nvmf/common.sh@297 -- # x722=() 00:04:46.019 17:08:55 -- nvmf/common.sh@297 -- # local -ga x722 00:04:46.019 17:08:55 -- nvmf/common.sh@298 -- # mlx=() 00:04:46.019 17:08:55 -- nvmf/common.sh@298 -- # local -ga mlx 00:04:46.019 17:08:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:46.019 17:08:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:46.019 17:08:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:46.019 17:08:55 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:46.019 17:08:55 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:46.019 17:08:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:46.019 17:08:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:46.019 17:08:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:46.019 17:08:55 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:46.019 17:08:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 
00:04:46.019 17:08:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:46.019 17:08:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:04:46.019 17:08:55 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:04:46.019 17:08:55 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:04:46.019 17:08:55 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:04:46.019 17:08:55 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:04:46.019 17:08:55 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:04:46.019 17:08:55 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:04:46.019 17:08:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:04:46.019 17:08:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:04:46.019 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:04:46.019 17:08:55 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:04:46.019 17:08:55 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:04:46.019 17:08:55 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:04:46.019 17:08:55 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:04:46.019 17:08:55 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:04:46.019 17:08:55 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:04:46.019 17:08:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:04:46.019 17:08:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:04:46.019 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:04:46.019 17:08:55 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:04:46.019 17:08:55 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:04:46.019 17:08:55 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:04:46.019 17:08:55 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:04:46.019 17:08:55 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:04:46.019 17:08:55 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:04:46.019 17:08:55 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:04:46.019 17:08:55 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:04:46.019 17:08:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:04:46.019 17:08:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:46.019 17:08:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:04:46.019 17:08:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:46.019 17:08:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:04:46.019 Found net devices under 0000:da:00.0: mlx_0_0 00:04:46.019 17:08:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:04:46.019 17:08:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:04:46.019 17:08:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:46.019 17:08:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:04:46.019 17:08:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:46.019 17:08:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:04:46.019 Found net devices under 0000:da:00.1: mlx_0_1 00:04:46.019 17:08:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:04:46.019 17:08:55 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:04:46.019 17:08:55 -- nvmf/common.sh@403 -- # is_hw=yes 00:04:46.019 17:08:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:04:46.019 17:08:55 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:04:46.019 17:08:55 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:04:46.019 17:08:55 
-- nvmf/common.sh@409 -- # rdma_device_init 00:04:46.019 17:08:55 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:04:46.019 17:08:55 -- nvmf/common.sh@58 -- # uname 00:04:46.019 17:08:55 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:04:46.019 17:08:55 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:04:46.019 17:08:55 -- nvmf/common.sh@63 -- # modprobe ib_core 00:04:46.019 17:08:55 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:04:46.019 17:08:55 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:04:46.019 17:08:55 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:04:46.019 17:08:55 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:04:46.019 17:08:55 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:04:46.019 17:08:55 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:04:46.019 17:08:55 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:04:46.019 17:08:55 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:04:46.019 17:08:55 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:04:46.019 17:08:55 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:04:46.019 17:08:55 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:04:46.019 17:08:55 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:04:46.019 17:08:55 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:04:46.019 17:08:55 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:04:46.019 17:08:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:46.019 17:08:55 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:04:46.019 17:08:55 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:04:46.019 17:08:55 -- nvmf/common.sh@105 -- # continue 2 00:04:46.019 17:08:55 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:04:46.019 17:08:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:46.019 17:08:55 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:04:46.019 17:08:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:46.019 17:08:55 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:04:46.019 17:08:55 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:04:46.019 17:08:55 -- nvmf/common.sh@105 -- # continue 2 00:04:46.019 17:08:55 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:04:46.019 17:08:55 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:04:46.019 17:08:55 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:04:46.019 17:08:55 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:04:46.019 17:08:55 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:04:46.019 17:08:55 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:04:46.019 17:08:55 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:04:46.019 17:08:55 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:04:46.019 17:08:55 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:04:46.019 430: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:04:46.019 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:04:46.019 altname enp218s0f0np0 00:04:46.019 altname ens818f0np0 00:04:46.019 inet 192.168.100.8/24 scope global mlx_0_0 00:04:46.019 valid_lft forever preferred_lft forever 00:04:46.278 17:08:55 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:04:46.278 17:08:55 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:04:46.278 17:08:55 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:04:46.278 17:08:55 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:04:46.278 17:08:55 -- nvmf/common.sh@113 -- # awk '{print $4}' 
00:04:46.278 17:08:55 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:04:46.278 17:08:55 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:04:46.278 17:08:55 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:04:46.278 17:08:55 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:04:46.278 431: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:04:46.278 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:04:46.278 altname enp218s0f1np1 00:04:46.278 altname ens818f1np1 00:04:46.278 inet 192.168.100.9/24 scope global mlx_0_1 00:04:46.278 valid_lft forever preferred_lft forever 00:04:46.278 17:08:55 -- nvmf/common.sh@411 -- # return 0 00:04:46.278 17:08:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:04:46.278 17:08:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:04:46.278 17:08:55 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:04:46.278 17:08:55 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:04:46.278 17:08:55 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:04:46.278 17:08:55 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:04:46.278 17:08:55 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:04:46.278 17:08:55 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:04:46.278 17:08:55 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:04:46.278 17:08:55 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:04:46.278 17:08:55 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:04:46.278 17:08:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:46.278 17:08:55 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:04:46.278 17:08:55 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:04:46.278 17:08:55 -- nvmf/common.sh@105 -- # continue 2 00:04:46.278 17:08:55 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:04:46.278 17:08:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:46.278 17:08:55 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:04:46.278 17:08:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:46.278 17:08:55 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:04:46.278 17:08:55 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:04:46.278 17:08:55 -- nvmf/common.sh@105 -- # continue 2 00:04:46.278 17:08:55 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:04:46.278 17:08:55 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:04:46.278 17:08:55 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:04:46.278 17:08:55 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:04:46.278 17:08:55 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:04:46.278 17:08:55 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:04:46.278 17:08:55 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:04:46.278 17:08:55 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:04:46.278 17:08:55 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:04:46.278 17:08:55 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:04:46.278 17:08:55 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:04:46.278 17:08:55 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:04:46.278 17:08:55 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:04:46.278 192.168.100.9' 00:04:46.278 17:08:55 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:04:46.278 192.168.100.9' 00:04:46.278 17:08:55 -- nvmf/common.sh@446 -- # head -n 1 00:04:46.278 17:08:55 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:04:46.278 17:08:55 -- 
nvmf/common.sh@447 -- # echo '192.168.100.8 00:04:46.278 192.168.100.9' 00:04:46.278 17:08:55 -- nvmf/common.sh@447 -- # tail -n +2 00:04:46.278 17:08:55 -- nvmf/common.sh@447 -- # head -n 1 00:04:46.278 17:08:55 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:04:46.278 17:08:55 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:04:46.278 17:08:55 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:04:46.278 17:08:55 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:04:46.278 17:08:55 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:04:46.278 17:08:55 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:04:46.278 17:08:55 -- json_config/json_config.sh@237 -- # [[ -z 192.168.100.8 ]] 00:04:46.278 17:08:55 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:46.278 17:08:55 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:46.536 MallocForNvmf0 00:04:46.536 17:08:55 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:46.536 17:08:55 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:46.536 MallocForNvmf1 00:04:46.536 17:08:55 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:04:46.536 17:08:55 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:04:46.795 [2024-04-24 17:08:55.855410] rdma.c:2778:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:04:46.795 [2024-04-24 17:08:55.883072] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x55ed20/0x58bd00) succeed. 00:04:46.795 [2024-04-24 17:08:55.895012] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x560f10/0x5ebd00) succeed. 
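The malloc bdev and RDMA transport setup traced above reduces to three plain rpc.py calls against the target socket; as a rough standalone recap (socket path, sizes and names copied from the trace, rpc.py path abbreviated, not part of the original run):

  # sketch only: the RPCs driven by tgt_rpc in json_config.sh@242-245 above
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  # -c 0 is raised to the 256-byte in-capsule minimum by the target, per the rdma.c warning above
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0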
00:04:46.795 17:08:55 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:46.795 17:08:55 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:47.053 17:08:56 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:47.053 17:08:56 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:47.053 17:08:56 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:47.053 17:08:56 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:47.311 17:08:56 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:04:47.311 17:08:56 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:04:47.570 [2024-04-24 17:08:56.623254] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:04:47.570 17:08:56 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:47.570 17:08:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:47.570 17:08:56 -- common/autotest_common.sh@10 -- # set +x 00:04:47.570 17:08:56 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:47.570 17:08:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:47.570 17:08:56 -- common/autotest_common.sh@10 -- # set +x 00:04:47.570 17:08:56 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:47.570 17:08:56 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:47.570 17:08:56 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:47.828 MallocBdevForConfigChangeCheck 00:04:47.828 17:08:56 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:47.828 17:08:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:47.828 17:08:56 -- common/autotest_common.sh@10 -- # set +x 00:04:47.828 17:08:56 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:47.828 17:08:56 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:48.086 17:08:57 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:48.086 INFO: shutting down applications... 
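The subsystem wiring completed just above (and torn down by the shutdown that follows) is likewise a short RPC sequence; a hedged recap with the NQN, serial, namespaces and listener address exactly as traced:

  # sketch only: subsystem creation as driven by json_config.sh@246-249 above
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420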
00:04:48.086 17:08:57 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:48.086 17:08:57 -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:48.086 17:08:57 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:48.086 17:08:57 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:50.617 Calling clear_iscsi_subsystem 00:04:50.617 Calling clear_nvmf_subsystem 00:04:50.617 Calling clear_nbd_subsystem 00:04:50.617 Calling clear_ublk_subsystem 00:04:50.617 Calling clear_vhost_blk_subsystem 00:04:50.617 Calling clear_vhost_scsi_subsystem 00:04:50.617 Calling clear_bdev_subsystem 00:04:50.617 17:08:59 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:04:50.617 17:08:59 -- json_config/json_config.sh@343 -- # count=100 00:04:50.617 17:08:59 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:50.617 17:08:59 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:50.617 17:08:59 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:50.617 17:08:59 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:50.617 17:08:59 -- json_config/json_config.sh@345 -- # break 00:04:50.617 17:08:59 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:50.617 17:08:59 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:50.617 17:08:59 -- json_config/common.sh@31 -- # local app=target 00:04:50.617 17:08:59 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:50.617 17:08:59 -- json_config/common.sh@35 -- # [[ -n 2924278 ]] 00:04:50.617 17:08:59 -- json_config/common.sh@38 -- # kill -SIGINT 2924278 00:04:50.617 17:08:59 -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:50.617 17:08:59 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.617 17:08:59 -- json_config/common.sh@41 -- # kill -0 2924278 00:04:50.617 17:08:59 -- json_config/common.sh@45 -- # sleep 0.5 00:04:51.182 17:09:00 -- json_config/common.sh@40 -- # (( i++ )) 00:04:51.182 17:09:00 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.182 17:09:00 -- json_config/common.sh@41 -- # kill -0 2924278 00:04:51.182 17:09:00 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:51.182 17:09:00 -- json_config/common.sh@43 -- # break 00:04:51.182 17:09:00 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:51.182 17:09:00 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:51.182 SPDK target shutdown done 00:04:51.182 17:09:00 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:51.182 INFO: relaunching applications... 
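The relaunch step that follows restarts the target from the JSON produced by save_config rather than reissuing the individual RPCs; roughly (target command line as in the trace below; the redirection is an assumption about how spdk_tgt_config.json was written earlier):

  # sketch only: persist the live config, then boot a fresh target from it
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json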
00:04:51.182 17:09:00 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:51.182 17:09:00 -- json_config/common.sh@9 -- # local app=target 00:04:51.182 17:09:00 -- json_config/common.sh@10 -- # shift 00:04:51.182 17:09:00 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:51.182 17:09:00 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:51.182 17:09:00 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:51.182 17:09:00 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.182 17:09:00 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.182 17:09:00 -- json_config/common.sh@22 -- # app_pid["$app"]=2928814 00:04:51.182 17:09:00 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:51.182 17:09:00 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:51.182 Waiting for target to run... 00:04:51.182 17:09:00 -- json_config/common.sh@25 -- # waitforlisten 2928814 /var/tmp/spdk_tgt.sock 00:04:51.182 17:09:00 -- common/autotest_common.sh@817 -- # '[' -z 2928814 ']' 00:04:51.182 17:09:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:51.182 17:09:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:51.182 17:09:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:51.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:51.182 17:09:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:51.182 17:09:00 -- common/autotest_common.sh@10 -- # set +x 00:04:51.182 [2024-04-24 17:09:00.228340] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:04:51.183 [2024-04-24 17:09:00.228392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2928814 ] 00:04:51.183 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.441 [2024-04-24 17:09:00.664857] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.735 [2024-04-24 17:09:00.756986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.047 [2024-04-24 17:09:03.780180] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x28b6240/0x283c200) succeed. 00:04:55.047 [2024-04-24 17:09:03.791473] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x28b5230/0x271c0c0) succeed. 00:04:55.047 [2024-04-24 17:09:03.845469] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:04:55.305 17:09:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:55.305 17:09:04 -- common/autotest_common.sh@850 -- # return 0 00:04:55.305 17:09:04 -- json_config/common.sh@26 -- # echo '' 00:04:55.305 00:04:55.305 17:09:04 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:55.305 17:09:04 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:55.305 INFO: Checking if target configuration is the same... 
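The "same configuration" check below works by normalizing both JSON documents with config_filter.py -method sort and diffing the results; a hedged sketch of an equivalent one-off comparison (temp-file names here are made up, and it is assumed config_filter.py reads the config on stdin, as json_diff.sh appears to use it; the real script creates its temp files with mktemp as traced below):

  # sketch only: normalize and compare the saved vs. live target config
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | test/json_config/config_filter.py -method sort > /tmp/live.sorted
  test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.sorted
  diff -u /tmp/saved.sorted /tmp/live.sorted && echo 'INFO: JSON config files are the same'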
00:04:55.305 17:09:04 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:55.305 17:09:04 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:55.305 17:09:04 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:55.305 + '[' 2 -ne 2 ']' 00:04:55.305 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:55.305 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:04:55.305 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:55.305 +++ basename /dev/fd/62 00:04:55.305 ++ mktemp /tmp/62.XXX 00:04:55.305 + tmp_file_1=/tmp/62.EST 00:04:55.305 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:55.305 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:55.305 + tmp_file_2=/tmp/spdk_tgt_config.json.yDz 00:04:55.305 + ret=0 00:04:55.305 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:55.563 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:55.563 + diff -u /tmp/62.EST /tmp/spdk_tgt_config.json.yDz 00:04:55.563 + echo 'INFO: JSON config files are the same' 00:04:55.563 INFO: JSON config files are the same 00:04:55.563 + rm /tmp/62.EST /tmp/spdk_tgt_config.json.yDz 00:04:55.563 + exit 0 00:04:55.563 17:09:04 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:55.563 17:09:04 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:55.563 INFO: changing configuration and checking if this can be detected... 00:04:55.563 17:09:04 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:55.563 17:09:04 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:55.821 17:09:04 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:55.821 17:09:04 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:55.821 17:09:04 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:55.821 + '[' 2 -ne 2 ']' 00:04:55.821 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:55.821 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 
00:04:55.821 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:55.821 +++ basename /dev/fd/62 00:04:55.821 ++ mktemp /tmp/62.XXX 00:04:55.821 + tmp_file_1=/tmp/62.Ye1 00:04:55.821 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:55.821 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:55.821 + tmp_file_2=/tmp/spdk_tgt_config.json.fuv 00:04:55.821 + ret=0 00:04:55.821 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:56.079 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:56.079 + diff -u /tmp/62.Ye1 /tmp/spdk_tgt_config.json.fuv 00:04:56.079 + ret=1 00:04:56.079 + echo '=== Start of file: /tmp/62.Ye1 ===' 00:04:56.079 + cat /tmp/62.Ye1 00:04:56.079 + echo '=== End of file: /tmp/62.Ye1 ===' 00:04:56.079 + echo '' 00:04:56.079 + echo '=== Start of file: /tmp/spdk_tgt_config.json.fuv ===' 00:04:56.079 + cat /tmp/spdk_tgt_config.json.fuv 00:04:56.079 + echo '=== End of file: /tmp/spdk_tgt_config.json.fuv ===' 00:04:56.079 + echo '' 00:04:56.079 + rm /tmp/62.Ye1 /tmp/spdk_tgt_config.json.fuv 00:04:56.079 + exit 1 00:04:56.079 17:09:05 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:56.079 INFO: configuration change detected. 00:04:56.079 17:09:05 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:56.079 17:09:05 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:56.079 17:09:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:56.079 17:09:05 -- common/autotest_common.sh@10 -- # set +x 00:04:56.079 17:09:05 -- json_config/json_config.sh@307 -- # local ret=0 00:04:56.079 17:09:05 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:56.079 17:09:05 -- json_config/json_config.sh@317 -- # [[ -n 2928814 ]] 00:04:56.079 17:09:05 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:56.079 17:09:05 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:56.079 17:09:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:56.079 17:09:05 -- common/autotest_common.sh@10 -- # set +x 00:04:56.079 17:09:05 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:56.079 17:09:05 -- json_config/json_config.sh@193 -- # uname -s 00:04:56.079 17:09:05 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:56.079 17:09:05 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:56.079 17:09:05 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:56.079 17:09:05 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:56.079 17:09:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:56.079 17:09:05 -- common/autotest_common.sh@10 -- # set +x 00:04:56.079 17:09:05 -- json_config/json_config.sh@323 -- # killprocess 2928814 00:04:56.079 17:09:05 -- common/autotest_common.sh@936 -- # '[' -z 2928814 ']' 00:04:56.079 17:09:05 -- common/autotest_common.sh@940 -- # kill -0 2928814 00:04:56.079 17:09:05 -- common/autotest_common.sh@941 -- # uname 00:04:56.079 17:09:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:56.079 17:09:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2928814 00:04:56.338 17:09:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:56.338 17:09:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:56.338 17:09:05 -- common/autotest_common.sh@954 -- # echo 'killing 
process with pid 2928814' 00:04:56.338 killing process with pid 2928814 00:04:56.338 17:09:05 -- common/autotest_common.sh@955 -- # kill 2928814 00:04:56.338 17:09:05 -- common/autotest_common.sh@960 -- # wait 2928814 00:04:58.237 17:09:07 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:58.238 17:09:07 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:58.238 17:09:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:58.238 17:09:07 -- common/autotest_common.sh@10 -- # set +x 00:04:58.496 17:09:07 -- json_config/json_config.sh@328 -- # return 0 00:04:58.496 17:09:07 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:58.496 INFO: Success 00:04:58.496 17:09:07 -- json_config/json_config.sh@1 -- # nvmftestfini 00:04:58.496 17:09:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:04:58.496 17:09:07 -- nvmf/common.sh@117 -- # sync 00:04:58.496 17:09:07 -- nvmf/common.sh@119 -- # '[' '' == tcp ']' 00:04:58.496 17:09:07 -- nvmf/common.sh@119 -- # '[' '' == rdma ']' 00:04:58.496 17:09:07 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:04:58.496 17:09:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:04:58.496 17:09:07 -- nvmf/common.sh@484 -- # [[ '' == \t\c\p ]] 00:04:58.496 00:04:58.496 real 0m21.922s 00:04:58.496 user 0m23.921s 00:04:58.496 sys 0m6.222s 00:04:58.496 17:09:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:58.496 17:09:07 -- common/autotest_common.sh@10 -- # set +x 00:04:58.496 ************************************ 00:04:58.496 END TEST json_config 00:04:58.496 ************************************ 00:04:58.496 17:09:07 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:58.496 17:09:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:58.496 17:09:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:58.496 17:09:07 -- common/autotest_common.sh@10 -- # set +x 00:04:58.496 ************************************ 00:04:58.496 START TEST json_config_extra_key 00:04:58.496 ************************************ 00:04:58.496 17:09:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:58.496 17:09:07 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:58.496 17:09:07 -- nvmf/common.sh@7 -- # uname -s 00:04:58.496 17:09:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:58.496 17:09:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:58.496 17:09:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:58.496 17:09:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:58.496 17:09:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:58.755 17:09:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:58.755 17:09:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:58.755 17:09:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:58.755 17:09:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:58.755 17:09:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:58.755 17:09:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:04:58.755 17:09:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 
00:04:58.755 17:09:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:58.755 17:09:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:58.755 17:09:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:58.755 17:09:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:58.755 17:09:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:58.755 17:09:07 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:58.755 17:09:07 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:58.755 17:09:07 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:58.755 17:09:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.755 17:09:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.755 17:09:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.755 17:09:07 -- paths/export.sh@5 -- # export PATH 00:04:58.755 17:09:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.755 17:09:07 -- nvmf/common.sh@47 -- # : 0 00:04:58.755 17:09:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:58.755 17:09:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:58.755 17:09:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:58.755 17:09:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:58.755 17:09:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:58.755 17:09:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:58.755 17:09:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:58.755 17:09:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:58.755 17:09:07 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:04:58.755 17:09:07 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:58.755 17:09:07 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:58.755 17:09:07 -- 
json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:58.755 17:09:07 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:58.755 17:09:07 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:58.755 17:09:07 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:58.756 17:09:07 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:58.756 17:09:07 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:58.756 17:09:07 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:58.756 17:09:07 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:58.756 INFO: launching applications... 00:04:58.756 17:09:07 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:04:58.756 17:09:07 -- json_config/common.sh@9 -- # local app=target 00:04:58.756 17:09:07 -- json_config/common.sh@10 -- # shift 00:04:58.756 17:09:07 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:58.756 17:09:07 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:58.756 17:09:07 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:58.756 17:09:07 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:58.756 17:09:07 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:58.756 17:09:07 -- json_config/common.sh@22 -- # app_pid["$app"]=2930419 00:04:58.756 17:09:07 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:58.756 Waiting for target to run... 00:04:58.756 17:09:07 -- json_config/common.sh@25 -- # waitforlisten 2930419 /var/tmp/spdk_tgt.sock 00:04:58.756 17:09:07 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:04:58.756 17:09:07 -- common/autotest_common.sh@817 -- # '[' -z 2930419 ']' 00:04:58.756 17:09:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:58.756 17:09:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:58.756 17:09:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:58.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:58.756 17:09:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:58.756 17:09:07 -- common/autotest_common.sh@10 -- # set +x 00:04:58.756 [2024-04-24 17:09:07.814061] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:04:58.756 [2024-04-24 17:09:07.814115] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2930419 ] 00:04:58.756 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.014 [2024-04-24 17:09:08.092025] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.014 [2024-04-24 17:09:08.158733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.581 17:09:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:59.581 17:09:08 -- common/autotest_common.sh@850 -- # return 0 00:04:59.581 17:09:08 -- json_config/common.sh@26 -- # echo '' 00:04:59.581 00:04:59.581 17:09:08 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:59.581 INFO: shutting down applications... 00:04:59.581 17:09:08 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:59.581 17:09:08 -- json_config/common.sh@31 -- # local app=target 00:04:59.581 17:09:08 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:59.581 17:09:08 -- json_config/common.sh@35 -- # [[ -n 2930419 ]] 00:04:59.581 17:09:08 -- json_config/common.sh@38 -- # kill -SIGINT 2930419 00:04:59.581 17:09:08 -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:59.581 17:09:08 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:59.581 17:09:08 -- json_config/common.sh@41 -- # kill -0 2930419 00:04:59.581 17:09:08 -- json_config/common.sh@45 -- # sleep 0.5 00:05:00.149 17:09:09 -- json_config/common.sh@40 -- # (( i++ )) 00:05:00.149 17:09:09 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:00.149 17:09:09 -- json_config/common.sh@41 -- # kill -0 2930419 00:05:00.149 17:09:09 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:00.149 17:09:09 -- json_config/common.sh@43 -- # break 00:05:00.149 17:09:09 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:00.149 17:09:09 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:00.149 SPDK target shutdown done 00:05:00.149 17:09:09 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:00.149 Success 00:05:00.149 00:05:00.149 real 0m1.442s 00:05:00.149 user 0m1.258s 00:05:00.149 sys 0m0.359s 00:05:00.149 17:09:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:00.149 17:09:09 -- common/autotest_common.sh@10 -- # set +x 00:05:00.149 ************************************ 00:05:00.149 END TEST json_config_extra_key 00:05:00.149 ************************************ 00:05:00.149 17:09:09 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:00.149 17:09:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:00.149 17:09:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:00.149 17:09:09 -- common/autotest_common.sh@10 -- # set +x 00:05:00.149 ************************************ 00:05:00.149 START TEST alias_rpc 00:05:00.149 ************************************ 00:05:00.149 17:09:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:00.149 * Looking for test storage... 
00:05:00.149 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:05:00.149 17:09:09 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:00.149 17:09:09 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.149 17:09:09 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2931026 00:05:00.149 17:09:09 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2931026 00:05:00.149 17:09:09 -- common/autotest_common.sh@817 -- # '[' -z 2931026 ']' 00:05:00.149 17:09:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.149 17:09:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:00.149 17:09:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.149 17:09:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:00.149 17:09:09 -- common/autotest_common.sh@10 -- # set +x 00:05:00.149 [2024-04-24 17:09:09.396433] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:05:00.149 [2024-04-24 17:09:09.396485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2931026 ] 00:05:00.409 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.409 [2024-04-24 17:09:09.450405] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.409 [2024-04-24 17:09:09.529424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.983 17:09:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:00.984 17:09:10 -- common/autotest_common.sh@850 -- # return 0 00:05:00.984 17:09:10 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:01.248 17:09:10 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2931026 00:05:01.248 17:09:10 -- common/autotest_common.sh@936 -- # '[' -z 2931026 ']' 00:05:01.248 17:09:10 -- common/autotest_common.sh@940 -- # kill -0 2931026 00:05:01.248 17:09:10 -- common/autotest_common.sh@941 -- # uname 00:05:01.248 17:09:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:01.248 17:09:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2931026 00:05:01.248 17:09:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:01.248 17:09:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:01.248 17:09:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2931026' 00:05:01.248 killing process with pid 2931026 00:05:01.248 17:09:10 -- common/autotest_common.sh@955 -- # kill 2931026 00:05:01.248 17:09:10 -- common/autotest_common.sh@960 -- # wait 2931026 00:05:01.815 00:05:01.815 real 0m1.495s 00:05:01.815 user 0m1.645s 00:05:01.815 sys 0m0.374s 00:05:01.815 17:09:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:01.815 17:09:10 -- common/autotest_common.sh@10 -- # set +x 00:05:01.815 ************************************ 00:05:01.815 END TEST alias_rpc 00:05:01.815 ************************************ 00:05:01.815 17:09:10 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:05:01.815 17:09:10 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:01.815 17:09:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:01.815 17:09:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:01.815 17:09:10 -- common/autotest_common.sh@10 -- # set +x 00:05:01.815 ************************************ 00:05:01.815 START TEST spdkcli_tcp 00:05:01.815 ************************************ 00:05:01.815 17:09:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:01.815 * Looking for test storage... 00:05:01.815 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:05:01.815 17:09:11 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:05:01.815 17:09:11 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:01.815 17:09:11 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:05:01.815 17:09:11 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:01.815 17:09:11 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:01.815 17:09:11 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:01.815 17:09:11 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:01.815 17:09:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:01.815 17:09:11 -- common/autotest_common.sh@10 -- # set +x 00:05:01.815 17:09:11 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2931388 00:05:01.815 17:09:11 -- spdkcli/tcp.sh@27 -- # waitforlisten 2931388 00:05:01.815 17:09:11 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:01.815 17:09:11 -- common/autotest_common.sh@817 -- # '[' -z 2931388 ']' 00:05:01.815 17:09:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.815 17:09:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:01.815 17:09:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.815 17:09:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:01.815 17:09:11 -- common/autotest_common.sh@10 -- # set +x 00:05:01.815 [2024-04-24 17:09:11.058186] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:05:01.815 [2024-04-24 17:09:11.058234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2931388 ] 00:05:02.074 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.074 [2024-04-24 17:09:11.111845] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.074 [2024-04-24 17:09:11.184994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.074 [2024-04-24 17:09:11.184996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.640 17:09:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:02.640 17:09:11 -- common/autotest_common.sh@850 -- # return 0 00:05:02.640 17:09:11 -- spdkcli/tcp.sh@31 -- # socat_pid=2931618 00:05:02.640 17:09:11 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:02.640 17:09:11 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:02.899 [ 00:05:02.899 "bdev_malloc_delete", 00:05:02.899 "bdev_malloc_create", 00:05:02.899 "bdev_null_resize", 00:05:02.899 "bdev_null_delete", 00:05:02.899 "bdev_null_create", 00:05:02.899 "bdev_nvme_cuse_unregister", 00:05:02.899 "bdev_nvme_cuse_register", 00:05:02.899 "bdev_opal_new_user", 00:05:02.899 "bdev_opal_set_lock_state", 00:05:02.899 "bdev_opal_delete", 00:05:02.899 "bdev_opal_get_info", 00:05:02.899 "bdev_opal_create", 00:05:02.899 "bdev_nvme_opal_revert", 00:05:02.899 "bdev_nvme_opal_init", 00:05:02.899 "bdev_nvme_send_cmd", 00:05:02.899 "bdev_nvme_get_path_iostat", 00:05:02.899 "bdev_nvme_get_mdns_discovery_info", 00:05:02.899 "bdev_nvme_stop_mdns_discovery", 00:05:02.899 "bdev_nvme_start_mdns_discovery", 00:05:02.899 "bdev_nvme_set_multipath_policy", 00:05:02.899 "bdev_nvme_set_preferred_path", 00:05:02.899 "bdev_nvme_get_io_paths", 00:05:02.899 "bdev_nvme_remove_error_injection", 00:05:02.899 "bdev_nvme_add_error_injection", 00:05:02.899 "bdev_nvme_get_discovery_info", 00:05:02.899 "bdev_nvme_stop_discovery", 00:05:02.899 "bdev_nvme_start_discovery", 00:05:02.899 "bdev_nvme_get_controller_health_info", 00:05:02.899 "bdev_nvme_disable_controller", 00:05:02.899 "bdev_nvme_enable_controller", 00:05:02.899 "bdev_nvme_reset_controller", 00:05:02.899 "bdev_nvme_get_transport_statistics", 00:05:02.899 "bdev_nvme_apply_firmware", 00:05:02.899 "bdev_nvme_detach_controller", 00:05:02.899 "bdev_nvme_get_controllers", 00:05:02.899 "bdev_nvme_attach_controller", 00:05:02.899 "bdev_nvme_set_hotplug", 00:05:02.899 "bdev_nvme_set_options", 00:05:02.899 "bdev_passthru_delete", 00:05:02.899 "bdev_passthru_create", 00:05:02.899 "bdev_lvol_grow_lvstore", 00:05:02.899 "bdev_lvol_get_lvols", 00:05:02.899 "bdev_lvol_get_lvstores", 00:05:02.899 "bdev_lvol_delete", 00:05:02.899 "bdev_lvol_set_read_only", 00:05:02.899 "bdev_lvol_resize", 00:05:02.899 "bdev_lvol_decouple_parent", 00:05:02.899 "bdev_lvol_inflate", 00:05:02.899 "bdev_lvol_rename", 00:05:02.899 "bdev_lvol_clone_bdev", 00:05:02.899 "bdev_lvol_clone", 00:05:02.899 "bdev_lvol_snapshot", 00:05:02.899 "bdev_lvol_create", 00:05:02.899 "bdev_lvol_delete_lvstore", 00:05:02.899 "bdev_lvol_rename_lvstore", 00:05:02.899 "bdev_lvol_create_lvstore", 00:05:02.899 "bdev_raid_set_options", 00:05:02.899 "bdev_raid_remove_base_bdev", 00:05:02.899 "bdev_raid_add_base_bdev", 00:05:02.899 "bdev_raid_delete", 00:05:02.899 "bdev_raid_create", 
00:05:02.899 "bdev_raid_get_bdevs", 00:05:02.899 "bdev_error_inject_error", 00:05:02.899 "bdev_error_delete", 00:05:02.899 "bdev_error_create", 00:05:02.899 "bdev_split_delete", 00:05:02.899 "bdev_split_create", 00:05:02.899 "bdev_delay_delete", 00:05:02.899 "bdev_delay_create", 00:05:02.899 "bdev_delay_update_latency", 00:05:02.899 "bdev_zone_block_delete", 00:05:02.899 "bdev_zone_block_create", 00:05:02.899 "blobfs_create", 00:05:02.899 "blobfs_detect", 00:05:02.899 "blobfs_set_cache_size", 00:05:02.899 "bdev_aio_delete", 00:05:02.899 "bdev_aio_rescan", 00:05:02.899 "bdev_aio_create", 00:05:02.899 "bdev_ftl_set_property", 00:05:02.899 "bdev_ftl_get_properties", 00:05:02.899 "bdev_ftl_get_stats", 00:05:02.899 "bdev_ftl_unmap", 00:05:02.899 "bdev_ftl_unload", 00:05:02.899 "bdev_ftl_delete", 00:05:02.899 "bdev_ftl_load", 00:05:02.899 "bdev_ftl_create", 00:05:02.899 "bdev_virtio_attach_controller", 00:05:02.899 "bdev_virtio_scsi_get_devices", 00:05:02.899 "bdev_virtio_detach_controller", 00:05:02.899 "bdev_virtio_blk_set_hotplug", 00:05:02.899 "bdev_iscsi_delete", 00:05:02.899 "bdev_iscsi_create", 00:05:02.899 "bdev_iscsi_set_options", 00:05:02.899 "accel_error_inject_error", 00:05:02.899 "ioat_scan_accel_module", 00:05:02.899 "dsa_scan_accel_module", 00:05:02.899 "iaa_scan_accel_module", 00:05:02.899 "keyring_file_remove_key", 00:05:02.899 "keyring_file_add_key", 00:05:02.899 "iscsi_set_options", 00:05:02.899 "iscsi_get_auth_groups", 00:05:02.899 "iscsi_auth_group_remove_secret", 00:05:02.899 "iscsi_auth_group_add_secret", 00:05:02.899 "iscsi_delete_auth_group", 00:05:02.899 "iscsi_create_auth_group", 00:05:02.899 "iscsi_set_discovery_auth", 00:05:02.899 "iscsi_get_options", 00:05:02.899 "iscsi_target_node_request_logout", 00:05:02.899 "iscsi_target_node_set_redirect", 00:05:02.899 "iscsi_target_node_set_auth", 00:05:02.899 "iscsi_target_node_add_lun", 00:05:02.899 "iscsi_get_stats", 00:05:02.899 "iscsi_get_connections", 00:05:02.899 "iscsi_portal_group_set_auth", 00:05:02.899 "iscsi_start_portal_group", 00:05:02.899 "iscsi_delete_portal_group", 00:05:02.899 "iscsi_create_portal_group", 00:05:02.899 "iscsi_get_portal_groups", 00:05:02.899 "iscsi_delete_target_node", 00:05:02.899 "iscsi_target_node_remove_pg_ig_maps", 00:05:02.899 "iscsi_target_node_add_pg_ig_maps", 00:05:02.899 "iscsi_create_target_node", 00:05:02.899 "iscsi_get_target_nodes", 00:05:02.899 "iscsi_delete_initiator_group", 00:05:02.899 "iscsi_initiator_group_remove_initiators", 00:05:02.899 "iscsi_initiator_group_add_initiators", 00:05:02.899 "iscsi_create_initiator_group", 00:05:02.899 "iscsi_get_initiator_groups", 00:05:02.899 "nvmf_set_crdt", 00:05:02.899 "nvmf_set_config", 00:05:02.899 "nvmf_set_max_subsystems", 00:05:02.899 "nvmf_subsystem_get_listeners", 00:05:02.899 "nvmf_subsystem_get_qpairs", 00:05:02.899 "nvmf_subsystem_get_controllers", 00:05:02.899 "nvmf_get_stats", 00:05:02.899 "nvmf_get_transports", 00:05:02.899 "nvmf_create_transport", 00:05:02.899 "nvmf_get_targets", 00:05:02.899 "nvmf_delete_target", 00:05:02.899 "nvmf_create_target", 00:05:02.899 "nvmf_subsystem_allow_any_host", 00:05:02.899 "nvmf_subsystem_remove_host", 00:05:02.899 "nvmf_subsystem_add_host", 00:05:02.899 "nvmf_ns_remove_host", 00:05:02.899 "nvmf_ns_add_host", 00:05:02.899 "nvmf_subsystem_remove_ns", 00:05:02.899 "nvmf_subsystem_add_ns", 00:05:02.900 "nvmf_subsystem_listener_set_ana_state", 00:05:02.900 "nvmf_discovery_get_referrals", 00:05:02.900 "nvmf_discovery_remove_referral", 00:05:02.900 "nvmf_discovery_add_referral", 00:05:02.900 
"nvmf_subsystem_remove_listener", 00:05:02.900 "nvmf_subsystem_add_listener", 00:05:02.900 "nvmf_delete_subsystem", 00:05:02.900 "nvmf_create_subsystem", 00:05:02.900 "nvmf_get_subsystems", 00:05:02.900 "env_dpdk_get_mem_stats", 00:05:02.900 "nbd_get_disks", 00:05:02.900 "nbd_stop_disk", 00:05:02.900 "nbd_start_disk", 00:05:02.900 "ublk_recover_disk", 00:05:02.900 "ublk_get_disks", 00:05:02.900 "ublk_stop_disk", 00:05:02.900 "ublk_start_disk", 00:05:02.900 "ublk_destroy_target", 00:05:02.900 "ublk_create_target", 00:05:02.900 "virtio_blk_create_transport", 00:05:02.900 "virtio_blk_get_transports", 00:05:02.900 "vhost_controller_set_coalescing", 00:05:02.900 "vhost_get_controllers", 00:05:02.900 "vhost_delete_controller", 00:05:02.900 "vhost_create_blk_controller", 00:05:02.900 "vhost_scsi_controller_remove_target", 00:05:02.900 "vhost_scsi_controller_add_target", 00:05:02.900 "vhost_start_scsi_controller", 00:05:02.900 "vhost_create_scsi_controller", 00:05:02.900 "thread_set_cpumask", 00:05:02.900 "framework_get_scheduler", 00:05:02.900 "framework_set_scheduler", 00:05:02.900 "framework_get_reactors", 00:05:02.900 "thread_get_io_channels", 00:05:02.900 "thread_get_pollers", 00:05:02.900 "thread_get_stats", 00:05:02.900 "framework_monitor_context_switch", 00:05:02.900 "spdk_kill_instance", 00:05:02.900 "log_enable_timestamps", 00:05:02.900 "log_get_flags", 00:05:02.900 "log_clear_flag", 00:05:02.900 "log_set_flag", 00:05:02.900 "log_get_level", 00:05:02.900 "log_set_level", 00:05:02.900 "log_get_print_level", 00:05:02.900 "log_set_print_level", 00:05:02.900 "framework_enable_cpumask_locks", 00:05:02.900 "framework_disable_cpumask_locks", 00:05:02.900 "framework_wait_init", 00:05:02.900 "framework_start_init", 00:05:02.900 "scsi_get_devices", 00:05:02.900 "bdev_get_histogram", 00:05:02.900 "bdev_enable_histogram", 00:05:02.900 "bdev_set_qos_limit", 00:05:02.900 "bdev_set_qd_sampling_period", 00:05:02.900 "bdev_get_bdevs", 00:05:02.900 "bdev_reset_iostat", 00:05:02.900 "bdev_get_iostat", 00:05:02.900 "bdev_examine", 00:05:02.900 "bdev_wait_for_examine", 00:05:02.900 "bdev_set_options", 00:05:02.900 "notify_get_notifications", 00:05:02.900 "notify_get_types", 00:05:02.900 "accel_get_stats", 00:05:02.900 "accel_set_options", 00:05:02.900 "accel_set_driver", 00:05:02.900 "accel_crypto_key_destroy", 00:05:02.900 "accel_crypto_keys_get", 00:05:02.900 "accel_crypto_key_create", 00:05:02.900 "accel_assign_opc", 00:05:02.900 "accel_get_module_info", 00:05:02.900 "accel_get_opc_assignments", 00:05:02.900 "vmd_rescan", 00:05:02.900 "vmd_remove_device", 00:05:02.900 "vmd_enable", 00:05:02.900 "sock_set_default_impl", 00:05:02.900 "sock_impl_set_options", 00:05:02.900 "sock_impl_get_options", 00:05:02.900 "iobuf_get_stats", 00:05:02.900 "iobuf_set_options", 00:05:02.900 "framework_get_pci_devices", 00:05:02.900 "framework_get_config", 00:05:02.900 "framework_get_subsystems", 00:05:02.900 "trace_get_info", 00:05:02.900 "trace_get_tpoint_group_mask", 00:05:02.900 "trace_disable_tpoint_group", 00:05:02.900 "trace_enable_tpoint_group", 00:05:02.900 "trace_clear_tpoint_mask", 00:05:02.900 "trace_set_tpoint_mask", 00:05:02.900 "keyring_get_keys", 00:05:02.900 "spdk_get_version", 00:05:02.900 "rpc_get_methods" 00:05:02.900 ] 00:05:02.900 17:09:12 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:02.900 17:09:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:02.900 17:09:12 -- common/autotest_common.sh@10 -- # set +x 00:05:02.900 17:09:12 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM 
EXIT 00:05:02.900 17:09:12 -- spdkcli/tcp.sh@38 -- # killprocess 2931388 00:05:02.900 17:09:12 -- common/autotest_common.sh@936 -- # '[' -z 2931388 ']' 00:05:02.900 17:09:12 -- common/autotest_common.sh@940 -- # kill -0 2931388 00:05:02.900 17:09:12 -- common/autotest_common.sh@941 -- # uname 00:05:02.900 17:09:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:02.900 17:09:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2931388 00:05:02.900 17:09:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:02.900 17:09:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:02.900 17:09:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2931388' 00:05:02.900 killing process with pid 2931388 00:05:02.900 17:09:12 -- common/autotest_common.sh@955 -- # kill 2931388 00:05:02.900 17:09:12 -- common/autotest_common.sh@960 -- # wait 2931388 00:05:03.467 00:05:03.467 real 0m1.515s 00:05:03.467 user 0m2.818s 00:05:03.467 sys 0m0.411s 00:05:03.467 17:09:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:03.467 17:09:12 -- common/autotest_common.sh@10 -- # set +x 00:05:03.467 ************************************ 00:05:03.467 END TEST spdkcli_tcp 00:05:03.467 ************************************ 00:05:03.467 17:09:12 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:03.467 17:09:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:03.467 17:09:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:03.467 17:09:12 -- common/autotest_common.sh@10 -- # set +x 00:05:03.467 ************************************ 00:05:03.467 START TEST dpdk_mem_utility 00:05:03.467 ************************************ 00:05:03.468 17:09:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:03.468 * Looking for test storage... 00:05:03.468 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:05:03.468 17:09:12 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:03.468 17:09:12 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2931817 00:05:03.468 17:09:12 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2931817 00:05:03.468 17:09:12 -- common/autotest_common.sh@817 -- # '[' -z 2931817 ']' 00:05:03.468 17:09:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.468 17:09:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:03.468 17:09:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.468 17:09:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:03.468 17:09:12 -- common/autotest_common.sh@10 -- # set +x 00:05:03.468 17:09:12 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:03.725 [2024-04-24 17:09:12.732940] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
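For reference, the spdkcli_tcp section that ends just above talks to the target over TCP by bridging its UNIX-domain RPC socket with socat; the long rpc_get_methods listing printed earlier is fetched through that bridge. The wiring, reconstructed only from the commands recorded above:

  # expose the UNIX-domain RPC socket on 127.0.0.1:9998
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &

  # same RPC, but over TCP; -r/-t are the retry and timeout flags as recorded above
  ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods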
00:05:03.726 [2024-04-24 17:09:12.732989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2931817 ] 00:05:03.726 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.726 [2024-04-24 17:09:12.788069] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.726 [2024-04-24 17:09:12.859438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.292 17:09:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:04.292 17:09:13 -- common/autotest_common.sh@850 -- # return 0 00:05:04.292 17:09:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:04.292 17:09:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:04.292 17:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.292 17:09:13 -- common/autotest_common.sh@10 -- # set +x 00:05:04.292 { 00:05:04.292 "filename": "/tmp/spdk_mem_dump.txt" 00:05:04.292 } 00:05:04.292 17:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:04.292 17:09:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:04.551 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:04.551 1 heaps totaling size 814.000000 MiB 00:05:04.551 size: 814.000000 MiB heap id: 0 00:05:04.551 end heaps---------- 00:05:04.551 8 mempools totaling size 598.116089 MiB 00:05:04.551 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:04.551 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:04.551 size: 84.521057 MiB name: bdev_io_2931817 00:05:04.551 size: 51.011292 MiB name: evtpool_2931817 00:05:04.551 size: 50.003479 MiB name: msgpool_2931817 00:05:04.551 size: 21.763794 MiB name: PDU_Pool 00:05:04.551 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:04.551 size: 0.026123 MiB name: Session_Pool 00:05:04.551 end mempools------- 00:05:04.551 6 memzones totaling size 4.142822 MiB 00:05:04.551 size: 1.000366 MiB name: RG_ring_0_2931817 00:05:04.551 size: 1.000366 MiB name: RG_ring_1_2931817 00:05:04.551 size: 1.000366 MiB name: RG_ring_4_2931817 00:05:04.551 size: 1.000366 MiB name: RG_ring_5_2931817 00:05:04.551 size: 0.125366 MiB name: RG_ring_2_2931817 00:05:04.551 size: 0.015991 MiB name: RG_ring_3_2931817 00:05:04.551 end memzones------- 00:05:04.551 17:09:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:04.551 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:04.551 list of free elements. 
size: 12.519348 MiB 00:05:04.551 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:04.551 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:04.551 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:04.551 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:04.551 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:04.551 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:04.551 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:04.551 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:04.551 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:04.551 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:04.551 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:04.551 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:04.551 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:04.551 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:04.551 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:04.551 list of standard malloc elements. size: 199.218079 MiB 00:05:04.551 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:04.551 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:04.551 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:04.551 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:04.551 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:04.551 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:04.551 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:04.551 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:04.551 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:04.551 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:04.551 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:04.551 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:04.551 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:04.551 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:04.551 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:04.551 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:04.551 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:04.551 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:04.551 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:04.551 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:04.551 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:04.551 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:04.551 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:04.551 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:04.551 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:04.551 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:04.551 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:04.551 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:04.551 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:04.551 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:04.551 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:04.551 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:04.551 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:04.551 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:04.551 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:04.551 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:04.551 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:04.551 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:04.551 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:04.551 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:04.551 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:04.551 list of memzone associated elements. size: 602.262573 MiB 00:05:04.551 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:04.551 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:04.551 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:04.551 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:04.551 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:04.551 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2931817_0 00:05:04.551 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:04.551 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2931817_0 00:05:04.551 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:04.551 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2931817_0 00:05:04.551 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:04.552 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:04.552 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:04.552 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:04.552 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:04.552 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2931817 00:05:04.552 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:04.552 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2931817 00:05:04.552 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:04.552 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2931817 00:05:04.552 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:04.552 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:04.552 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:04.552 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:04.552 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:04.552 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:04.552 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:04.552 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:04.552 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:04.552 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2931817 00:05:04.552 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:04.552 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2931817 00:05:04.552 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:04.552 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2931817 00:05:04.552 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:04.552 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2931817 00:05:04.552 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:04.552 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2931817 00:05:04.552 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:04.552 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:04.552 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:04.552 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:04.552 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:04.552 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:04.552 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:04.552 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2931817 00:05:04.552 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:04.552 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:04.552 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:04.552 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:04.552 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:04.552 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2931817 00:05:04.552 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:04.552 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:04.552 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:04.552 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2931817 00:05:04.552 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:04.552 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2931817 00:05:04.552 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:04.552 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:04.552 17:09:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:04.552 17:09:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2931817 00:05:04.552 17:09:13 -- common/autotest_common.sh@936 -- # '[' -z 2931817 ']' 00:05:04.552 17:09:13 -- common/autotest_common.sh@940 -- # kill -0 2931817 00:05:04.552 17:09:13 -- common/autotest_common.sh@941 -- # uname 00:05:04.552 17:09:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:04.552 17:09:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2931817 00:05:04.552 17:09:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:04.552 17:09:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:04.552 17:09:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2931817' 00:05:04.552 killing process with pid 2931817 00:05:04.552 17:09:13 -- common/autotest_common.sh@955 -- # kill 2931817 00:05:04.552 17:09:13 -- common/autotest_common.sh@960 -- # wait 2931817 00:05:04.810 00:05:04.811 real 0m1.419s 00:05:04.811 user 0m1.501s 00:05:04.811 sys 0m0.389s 00:05:04.811 17:09:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:04.811 17:09:14 -- common/autotest_common.sh@10 -- # set +x 00:05:04.811 ************************************ 00:05:04.811 END TEST dpdk_mem_utility 00:05:04.811 ************************************ 00:05:04.811 17:09:14 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:04.811 17:09:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:04.811 17:09:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.811 17:09:14 -- common/autotest_common.sh@10 -- # set +x 00:05:05.069 
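The dpdk_mem_utility output above comes from two steps: an env_dpdk_get_mem_stats RPC that makes the target write a memory dump, and scripts/dpdk_mem_info.py, which parses that dump into the heap/mempool/memzone summary and, with -m 0, into the per-element listing for heap 0. A hedged manual equivalent, using only commands that appear in this log:

  # ask the running target to dump its DPDK memory state
  ./scripts/rpc.py env_dpdk_get_mem_stats
  # prints: { "filename": "/tmp/spdk_mem_dump.txt" }

  # summarize heaps, mempools and memzones from that dump
  ./scripts/dpdk_mem_info.py

  # detailed free/malloc element list for heap 0, as shown above
  ./scripts/dpdk_mem_info.py -m 0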
************************************ 00:05:05.069 START TEST event 00:05:05.069 ************************************ 00:05:05.069 17:09:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:05.069 * Looking for test storage... 00:05:05.069 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:05:05.069 17:09:14 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:05.069 17:09:14 -- bdev/nbd_common.sh@6 -- # set -e 00:05:05.069 17:09:14 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:05.069 17:09:14 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:05.069 17:09:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.069 17:09:14 -- common/autotest_common.sh@10 -- # set +x 00:05:05.328 ************************************ 00:05:05.328 START TEST event_perf 00:05:05.328 ************************************ 00:05:05.328 17:09:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:05.328 Running I/O for 1 seconds...[2024-04-24 17:09:14.426186] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:05:05.328 [2024-04-24 17:09:14.426253] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2932223 ] 00:05:05.328 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.328 [2024-04-24 17:09:14.485964] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:05.328 [2024-04-24 17:09:14.558800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.328 [2024-04-24 17:09:14.558901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:05.328 [2024-04-24 17:09:14.558923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:05.328 [2024-04-24 17:09:14.558925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.702 Running I/O for 1 seconds... 00:05:06.702 lcore 0: 206659 00:05:06.702 lcore 1: 206659 00:05:06.702 lcore 2: 206659 00:05:06.702 lcore 3: 206659 00:05:06.702 done. 00:05:06.702 00:05:06.702 real 0m1.241s 00:05:06.702 user 0m4.153s 00:05:06.702 sys 0m0.084s 00:05:06.702 17:09:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:06.702 17:09:15 -- common/autotest_common.sh@10 -- # set +x 00:05:06.702 ************************************ 00:05:06.702 END TEST event_perf 00:05:06.702 ************************************ 00:05:06.702 17:09:15 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:06.702 17:09:15 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:06.702 17:09:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:06.702 17:09:15 -- common/autotest_common.sh@10 -- # set +x 00:05:06.702 ************************************ 00:05:06.702 START TEST event_reactor 00:05:06.702 ************************************ 00:05:06.702 17:09:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:06.702 [2024-04-24 17:09:15.848424] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
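event_perf above is a small reactor micro-benchmark: one reactor per bit in the core mask, each counting how many events it processed in the allotted time (the four 'lcore N: 206659' lines). It can be run directly from the SPDK tree with the same arguments as the log:

  # four reactors (mask 0xF), one second run; per-lcore event counts are printed at the end
  ./test/event/event_perf/event_perf -m 0xF -t 1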
00:05:06.702 [2024-04-24 17:09:15.848494] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2932483 ] 00:05:06.702 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.702 [2024-04-24 17:09:15.910821] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.960 [2024-04-24 17:09:15.991504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.896 test_start 00:05:07.896 oneshot 00:05:07.896 tick 100 00:05:07.896 tick 100 00:05:07.896 tick 250 00:05:07.896 tick 100 00:05:07.896 tick 100 00:05:07.896 tick 100 00:05:07.896 tick 250 00:05:07.896 tick 500 00:05:07.896 tick 100 00:05:07.896 tick 100 00:05:07.896 tick 250 00:05:07.896 tick 100 00:05:07.896 tick 100 00:05:07.896 test_end 00:05:07.896 00:05:07.896 real 0m1.248s 00:05:07.896 user 0m1.161s 00:05:07.896 sys 0m0.083s 00:05:07.896 17:09:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:07.896 17:09:17 -- common/autotest_common.sh@10 -- # set +x 00:05:07.896 ************************************ 00:05:07.896 END TEST event_reactor 00:05:07.896 ************************************ 00:05:07.896 17:09:17 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:07.896 17:09:17 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:07.896 17:09:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.896 17:09:17 -- common/autotest_common.sh@10 -- # set +x 00:05:08.154 ************************************ 00:05:08.154 START TEST event_reactor_perf 00:05:08.154 ************************************ 00:05:08.154 17:09:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:08.154 [2024-04-24 17:09:17.270723] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:05:08.154 [2024-04-24 17:09:17.270798] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2932742 ] 00:05:08.154 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.154 [2024-04-24 17:09:17.329430] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.413 [2024-04-24 17:09:17.406946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.349 test_start 00:05:09.349 test_end 00:05:09.349 Performance: 514045 events per second 00:05:09.349 00:05:09.349 real 0m1.245s 00:05:09.349 user 0m1.167s 00:05:09.349 sys 0m0.074s 00:05:09.349 17:09:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:09.349 17:09:18 -- common/autotest_common.sh@10 -- # set +x 00:05:09.349 ************************************ 00:05:09.349 END TEST event_reactor_perf 00:05:09.349 ************************************ 00:05:09.349 17:09:18 -- event/event.sh@49 -- # uname -s 00:05:09.349 17:09:18 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:09.349 17:09:18 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:09.349 17:09:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:09.349 17:09:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.349 17:09:18 -- common/autotest_common.sh@10 -- # set +x 00:05:09.607 ************************************ 00:05:09.607 START TEST event_scheduler 00:05:09.607 ************************************ 00:05:09.607 17:09:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:09.607 * Looking for test storage... 00:05:09.607 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:05:09.607 17:09:18 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:09.607 17:09:18 -- scheduler/scheduler.sh@35 -- # scheduler_pid=2933030 00:05:09.607 17:09:18 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:09.607 17:09:18 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.607 17:09:18 -- scheduler/scheduler.sh@37 -- # waitforlisten 2933030 00:05:09.607 17:09:18 -- common/autotest_common.sh@817 -- # '[' -z 2933030 ']' 00:05:09.607 17:09:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.607 17:09:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:09.607 17:09:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.607 17:09:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:09.607 17:09:18 -- common/autotest_common.sh@10 -- # set +x 00:05:09.607 [2024-04-24 17:09:18.790457] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
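reactor_perf, whose single 'Performance: 514045 events per second' line appears above, measures back-to-back event throughput on one reactor. The invocation, taken verbatim from the log:

  # one reactor, one second of event submissions
  ./test/event/reactor_perf/reactor_perf -t 1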
00:05:09.607 [2024-04-24 17:09:18.790507] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2933030 ] 00:05:09.607 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.607 [2024-04-24 17:09:18.839083] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:09.865 [2024-04-24 17:09:18.913699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.866 [2024-04-24 17:09:18.913787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.866 [2024-04-24 17:09:18.913876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:09.866 [2024-04-24 17:09:18.913878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:10.432 17:09:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:10.432 17:09:19 -- common/autotest_common.sh@850 -- # return 0 00:05:10.432 17:09:19 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:10.432 17:09:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:10.432 17:09:19 -- common/autotest_common.sh@10 -- # set +x 00:05:10.432 POWER: Env isn't set yet! 00:05:10.432 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:10.432 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:10.432 POWER: Cannot set governor of lcore 0 to userspace 00:05:10.432 POWER: Attempting to initialise PSTAT power management... 00:05:10.432 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:10.432 POWER: Initialized successfully for lcore 0 power management 00:05:10.432 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:10.432 POWER: Initialized successfully for lcore 1 power management 00:05:10.432 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:10.432 POWER: Initialized successfully for lcore 2 power management 00:05:10.432 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:10.432 POWER: Initialized successfully for lcore 3 power management 00:05:10.432 17:09:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:10.432 17:09:19 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:10.432 17:09:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:10.432 17:09:19 -- common/autotest_common.sh@10 -- # set +x 00:05:10.690 [2024-04-24 17:09:19.723863] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
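The scheduler test app is started with --wait-for-rpc, so the framework_set_scheduler call above has to land before framework_start_init; the POWER lines show DPDK power management moving the cores to the 'performance' governor during that startup (they return to 'powersave' when the app exits, further down). The same sequence by hand, using only RPC methods that appear in the rpc_get_methods list earlier in this log:

  # select the dynamic scheduler while the app is still waiting for RPC
  ./scripts/rpc.py framework_set_scheduler dynamic
  ./scripts/rpc.py framework_start_init

  # verify which scheduler ended up active
  ./scripts/rpc.py framework_get_scheduler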
00:05:10.690 17:09:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:10.690 17:09:19 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:10.690 17:09:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.690 17:09:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.690 17:09:19 -- common/autotest_common.sh@10 -- # set +x 00:05:10.690 ************************************ 00:05:10.690 START TEST scheduler_create_thread 00:05:10.690 ************************************ 00:05:10.690 17:09:19 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:05:10.690 17:09:19 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:10.690 17:09:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:10.690 17:09:19 -- common/autotest_common.sh@10 -- # set +x 00:05:10.690 2 00:05:10.690 17:09:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:10.690 17:09:19 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:10.690 17:09:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:10.690 17:09:19 -- common/autotest_common.sh@10 -- # set +x 00:05:10.690 3 00:05:10.690 17:09:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:10.690 17:09:19 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:10.690 17:09:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:10.690 17:09:19 -- common/autotest_common.sh@10 -- # set +x 00:05:10.690 4 00:05:10.690 17:09:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:10.690 17:09:19 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:10.690 17:09:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:10.690 17:09:19 -- common/autotest_common.sh@10 -- # set +x 00:05:10.690 5 00:05:10.690 17:09:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:10.690 17:09:19 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:10.690 17:09:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:10.690 17:09:19 -- common/autotest_common.sh@10 -- # set +x 00:05:10.690 6 00:05:10.690 17:09:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:10.690 17:09:19 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:10.690 17:09:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:10.690 17:09:19 -- common/autotest_common.sh@10 -- # set +x 00:05:10.690 7 00:05:10.690 17:09:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:10.690 17:09:19 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:10.690 17:09:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:10.690 17:09:19 -- common/autotest_common.sh@10 -- # set +x 00:05:10.690 8 00:05:10.690 17:09:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:10.690 17:09:19 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:10.690 17:09:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:10.690 17:09:19 -- common/autotest_common.sh@10 -- # set +x 00:05:10.948 9 00:05:10.948 
17:09:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:10.948 17:09:19 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:10.948 17:09:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:10.948 17:09:19 -- common/autotest_common.sh@10 -- # set +x 00:05:10.948 10 00:05:10.948 17:09:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:10.948 17:09:19 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:10.948 17:09:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:10.948 17:09:19 -- common/autotest_common.sh@10 -- # set +x 00:05:10.948 17:09:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:10.948 17:09:19 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:10.948 17:09:19 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:10.948 17:09:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:10.948 17:09:19 -- common/autotest_common.sh@10 -- # set +x 00:05:11.882 17:09:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:11.882 17:09:20 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:11.882 17:09:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:11.882 17:09:20 -- common/autotest_common.sh@10 -- # set +x 00:05:13.254 17:09:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:13.254 17:09:22 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:13.254 17:09:22 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:13.254 17:09:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:13.254 17:09:22 -- common/autotest_common.sh@10 -- # set +x 00:05:14.187 17:09:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:14.187 00:05:14.188 real 0m3.382s 00:05:14.188 user 0m0.025s 00:05:14.188 sys 0m0.003s 00:05:14.188 17:09:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:14.188 17:09:23 -- common/autotest_common.sh@10 -- # set +x 00:05:14.188 ************************************ 00:05:14.188 END TEST scheduler_create_thread 00:05:14.188 ************************************ 00:05:14.188 17:09:23 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:14.188 17:09:23 -- scheduler/scheduler.sh@46 -- # killprocess 2933030 00:05:14.188 17:09:23 -- common/autotest_common.sh@936 -- # '[' -z 2933030 ']' 00:05:14.188 17:09:23 -- common/autotest_common.sh@940 -- # kill -0 2933030 00:05:14.188 17:09:23 -- common/autotest_common.sh@941 -- # uname 00:05:14.188 17:09:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:14.188 17:09:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2933030 00:05:14.188 17:09:23 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:14.188 17:09:23 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:14.188 17:09:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2933030' 00:05:14.188 killing process with pid 2933030 00:05:14.188 17:09:23 -- common/autotest_common.sh@955 -- # kill 2933030 00:05:14.188 17:09:23 -- common/autotest_common.sh@960 -- # wait 2933030 00:05:14.446 [2024-04-24 17:09:23.616396] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
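scheduler_create_thread drives the scheduler test app through its own RPC plugin rather than core SPDK methods: scheduler_thread_create, scheduler_thread_set_active and scheduler_thread_delete are provided by the scheduler_plugin module that ships with the test, and rpc.py loads it via --plugin (the test arranges for the module to be importable; that mechanism is not visible in this log). The calls as recorded above, with -a read as the thread's reported busy percentage (an interpretation, not stated in the log):

  # a thread pinned to core 0 that reports itself 100% busy
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100

  # change thread 11 to 50% active, then delete thread 12
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12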
00:05:14.704 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:14.704 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:14.704 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:14.704 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:14.704 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:14.704 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:14.704 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:14.704 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:14.704 00:05:14.704 real 0m5.211s 00:05:14.704 user 0m10.807s 00:05:14.704 sys 0m0.402s 00:05:14.704 17:09:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:14.704 17:09:23 -- common/autotest_common.sh@10 -- # set +x 00:05:14.704 ************************************ 00:05:14.704 END TEST event_scheduler 00:05:14.704 ************************************ 00:05:14.704 17:09:23 -- event/event.sh@51 -- # modprobe -n nbd 00:05:14.704 17:09:23 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:14.704 17:09:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:14.704 17:09:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.704 17:09:23 -- common/autotest_common.sh@10 -- # set +x 00:05:14.963 ************************************ 00:05:14.963 START TEST app_repeat 00:05:14.963 ************************************ 00:05:14.963 17:09:24 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:05:14.963 17:09:24 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.963 17:09:24 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.963 17:09:24 -- event/event.sh@13 -- # local nbd_list 00:05:14.963 17:09:24 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.963 17:09:24 -- event/event.sh@14 -- # local bdev_list 00:05:14.963 17:09:24 -- event/event.sh@15 -- # local repeat_times=4 00:05:14.963 17:09:24 -- event/event.sh@17 -- # modprobe nbd 00:05:14.963 17:09:24 -- event/event.sh@19 -- # repeat_pid=2934009 00:05:14.963 17:09:24 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:14.963 17:09:24 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:14.963 17:09:24 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2934009' 00:05:14.963 Process app_repeat pid: 2934009 00:05:14.963 17:09:24 -- event/event.sh@23 -- # for i in {0..2} 00:05:14.963 17:09:24 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:14.963 spdk_app_start Round 0 00:05:14.963 17:09:24 -- event/event.sh@25 -- # waitforlisten 2934009 /var/tmp/spdk-nbd.sock 00:05:14.963 17:09:24 -- common/autotest_common.sh@817 -- # '[' -z 2934009 ']' 00:05:14.963 17:09:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:14.963 17:09:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:14.963 17:09:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
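app_repeat, launched just above, starts and stops the same app several times ('spdk_app_start Round 0' is the first pass) and, in the lines that follow, backs each round with two malloc bdevs exported through the kernel NBD driver. The RPC half of that setup, using the socket, sizes and device names from the log (modprobe nbd is needed first, as the test itself checks):

  # two 64 MB malloc bdevs with a 4096-byte block size, created on the app_repeat socket
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096

  # attach them to the kernel NBD devices used by the test
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1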
00:05:14.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:14.963 17:09:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:14.963 17:09:24 -- common/autotest_common.sh@10 -- # set +x 00:05:14.963 [2024-04-24 17:09:24.097952] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:05:14.963 [2024-04-24 17:09:24.098008] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2934009 ] 00:05:14.963 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.963 [2024-04-24 17:09:24.156911] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:15.222 [2024-04-24 17:09:24.234365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.222 [2024-04-24 17:09:24.234367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.789 17:09:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:15.790 17:09:24 -- common/autotest_common.sh@850 -- # return 0 00:05:15.790 17:09:24 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.049 Malloc0 00:05:16.049 17:09:25 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.049 Malloc1 00:05:16.049 17:09:25 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.049 17:09:25 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.049 17:09:25 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.049 17:09:25 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:16.049 17:09:25 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.049 17:09:25 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:16.049 17:09:25 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.049 17:09:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.049 17:09:25 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.049 17:09:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:16.049 17:09:25 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.049 17:09:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:16.049 17:09:25 -- bdev/nbd_common.sh@12 -- # local i 00:05:16.049 17:09:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:16.049 17:09:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.049 17:09:25 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:16.308 /dev/nbd0 00:05:16.308 17:09:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:16.308 17:09:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:16.308 17:09:25 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:16.308 17:09:25 -- common/autotest_common.sh@855 -- # local i 00:05:16.308 17:09:25 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:16.308 17:09:25 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:16.308 17:09:25 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:16.308 17:09:25 -- common/autotest_common.sh@859 -- # 
break 00:05:16.308 17:09:25 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:16.308 17:09:25 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:16.308 17:09:25 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:16.308 1+0 records in 00:05:16.308 1+0 records out 00:05:16.308 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184711 s, 22.2 MB/s 00:05:16.308 17:09:25 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:16.308 17:09:25 -- common/autotest_common.sh@872 -- # size=4096 00:05:16.308 17:09:25 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:16.308 17:09:25 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:16.308 17:09:25 -- common/autotest_common.sh@875 -- # return 0 00:05:16.308 17:09:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:16.308 17:09:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.308 17:09:25 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:16.568 /dev/nbd1 00:05:16.568 17:09:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:16.568 17:09:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:16.568 17:09:25 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:16.568 17:09:25 -- common/autotest_common.sh@855 -- # local i 00:05:16.568 17:09:25 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:16.568 17:09:25 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:16.568 17:09:25 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:16.568 17:09:25 -- common/autotest_common.sh@859 -- # break 00:05:16.568 17:09:25 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:16.568 17:09:25 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:16.568 17:09:25 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:16.568 1+0 records in 00:05:16.568 1+0 records out 00:05:16.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204536 s, 20.0 MB/s 00:05:16.568 17:09:25 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:16.568 17:09:25 -- common/autotest_common.sh@872 -- # size=4096 00:05:16.568 17:09:25 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:16.568 17:09:25 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:16.568 17:09:25 -- common/autotest_common.sh@875 -- # return 0 00:05:16.568 17:09:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:16.568 17:09:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.568 17:09:25 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.568 17:09:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.568 17:09:25 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:16.828 { 00:05:16.828 "nbd_device": "/dev/nbd0", 00:05:16.828 "bdev_name": "Malloc0" 00:05:16.828 }, 00:05:16.828 { 00:05:16.828 "nbd_device": "/dev/nbd1", 00:05:16.828 "bdev_name": "Malloc1" 00:05:16.828 } 00:05:16.828 ]' 
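The waitfornbd calls traced above are how the test decides an exported device is usable: poll /proc/partitions for the device name, then do a single 4 KiB direct-I/O read and check that data actually arrived. A rough stand-alone equivalent (retry limit taken from the trace; the scratch-file path and the short sleep are assumptions, the harness uses spdk/test/event/nbdtest and never needs to retry here):

    # Sketch of the waitfornbd pattern, not the SPDK helper itself.
    wait_for_nbd() {
        local nbd_name=$1 tmp=/tmp/nbdtest i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed pause between polls
        done
        # One direct-I/O read proves the block device actually answers.
        dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
        [ "$(stat -c %s "$tmp")" != 0 ] || return 1
        rm -f "$tmp"
    }

    wait_for_nbd nbd0 && wait_for_nbd nbd1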
00:05:16.828 17:09:25 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:16.828 { 00:05:16.828 "nbd_device": "/dev/nbd0", 00:05:16.828 "bdev_name": "Malloc0" 00:05:16.828 }, 00:05:16.828 { 00:05:16.828 "nbd_device": "/dev/nbd1", 00:05:16.828 "bdev_name": "Malloc1" 00:05:16.828 } 00:05:16.828 ]' 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:16.828 /dev/nbd1' 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:16.828 /dev/nbd1' 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@65 -- # count=2 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@95 -- # count=2 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:16.828 256+0 records in 00:05:16.828 256+0 records out 00:05:16.828 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103437 s, 101 MB/s 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:16.828 256+0 records in 00:05:16.828 256+0 records out 00:05:16.828 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131979 s, 79.5 MB/s 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:16.828 256+0 records in 00:05:16.828 256+0 records out 00:05:16.828 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149513 s, 70.1 MB/s 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 
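The block above is the heart of nbd_rpc_data_verify for this round: confirm both devices are exported, push 1 MiB of random data through each one with direct I/O, then compare every device byte-for-byte against the source file. Condensed into a stand-alone sketch (the RPC socket and jq filter are as traced; the scratch path is illustrative):

    # Confirm two NBD devices are exported (the trace expects the count to be 2).
    count=$(./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
            | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
    [ "$count" -eq 2 ] || exit 1

    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp=/tmp/nbdrandtest

    # Write phase: 256 x 4 KiB random blocks, replayed onto every device.
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done

    # Verify phase: each device must read back identical to the source file.
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"
    done
    rm -f "$tmp"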
00:05:16.828 17:09:25 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@51 -- # local i 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.828 17:09:25 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:17.087 17:09:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:17.087 17:09:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:17.087 17:09:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:17.087 17:09:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.087 17:09:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.087 17:09:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:17.087 17:09:26 -- bdev/nbd_common.sh@41 -- # break 00:05:17.087 17:09:26 -- bdev/nbd_common.sh@45 -- # return 0 00:05:17.087 17:09:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.087 17:09:26 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:17.346 17:09:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:17.346 17:09:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:17.346 17:09:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:17.346 17:09:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.346 17:09:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.346 17:09:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:17.346 17:09:26 -- bdev/nbd_common.sh@41 -- # break 00:05:17.346 17:09:26 -- bdev/nbd_common.sh@45 -- # return 0 00:05:17.346 17:09:26 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.346 17:09:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.346 17:09:26 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.346 17:09:26 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:17.346 17:09:26 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:17.346 17:09:26 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.346 17:09:26 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:17.346 17:09:26 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:17.346 17:09:26 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.346 17:09:26 -- bdev/nbd_common.sh@65 -- # true 00:05:17.346 17:09:26 -- bdev/nbd_common.sh@65 -- # count=0 00:05:17.346 17:09:26 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:17.346 17:09:26 -- bdev/nbd_common.sh@104 -- # count=0 00:05:17.346 17:09:26 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:17.346 17:09:26 -- bdev/nbd_common.sh@109 -- # return 0 00:05:17.346 17:09:26 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:17.605 17:09:26 -- event/event.sh@35 -- # sleep 3 00:05:17.865 [2024-04-24 17:09:26.981100] app.c: 828:spdk_app_start: *NOTICE*: Total cores 
available: 2 00:05:17.865 [2024-04-24 17:09:27.045915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.865 [2024-04-24 17:09:27.045916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.865 [2024-04-24 17:09:27.086812] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:17.865 [2024-04-24 17:09:27.086865] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:21.233 17:09:29 -- event/event.sh@23 -- # for i in {0..2} 00:05:21.233 17:09:29 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:21.233 spdk_app_start Round 1 00:05:21.233 17:09:29 -- event/event.sh@25 -- # waitforlisten 2934009 /var/tmp/spdk-nbd.sock 00:05:21.233 17:09:29 -- common/autotest_common.sh@817 -- # '[' -z 2934009 ']' 00:05:21.233 17:09:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:21.233 17:09:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:21.233 17:09:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:21.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:21.233 17:09:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:21.233 17:09:29 -- common/autotest_common.sh@10 -- # set +x 00:05:21.233 17:09:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:21.233 17:09:29 -- common/autotest_common.sh@850 -- # return 0 00:05:21.233 17:09:29 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.233 Malloc0 00:05:21.233 17:09:30 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.233 Malloc1 00:05:21.233 17:09:30 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.233 17:09:30 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.233 17:09:30 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.233 17:09:30 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:21.233 17:09:30 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.233 17:09:30 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:21.233 17:09:30 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.233 17:09:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.233 17:09:30 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.233 17:09:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:21.233 17:09:30 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.233 17:09:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:21.233 17:09:30 -- bdev/nbd_common.sh@12 -- # local i 00:05:21.233 17:09:30 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:21.233 17:09:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.233 17:09:30 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:21.502 /dev/nbd0 00:05:21.502 17:09:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:21.502 17:09:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:21.502 
17:09:30 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:21.502 17:09:30 -- common/autotest_common.sh@855 -- # local i 00:05:21.502 17:09:30 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:21.502 17:09:30 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:21.502 17:09:30 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:21.502 17:09:30 -- common/autotest_common.sh@859 -- # break 00:05:21.502 17:09:30 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:21.502 17:09:30 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:21.502 17:09:30 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.502 1+0 records in 00:05:21.502 1+0 records out 00:05:21.502 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180951 s, 22.6 MB/s 00:05:21.502 17:09:30 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:21.502 17:09:30 -- common/autotest_common.sh@872 -- # size=4096 00:05:21.502 17:09:30 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:21.502 17:09:30 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:21.502 17:09:30 -- common/autotest_common.sh@875 -- # return 0 00:05:21.502 17:09:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.502 17:09:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.502 17:09:30 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:21.502 /dev/nbd1 00:05:21.502 17:09:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:21.502 17:09:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:21.502 17:09:30 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:21.502 17:09:30 -- common/autotest_common.sh@855 -- # local i 00:05:21.502 17:09:30 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:21.502 17:09:30 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:21.502 17:09:30 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:21.502 17:09:30 -- common/autotest_common.sh@859 -- # break 00:05:21.502 17:09:30 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:21.502 17:09:30 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:21.502 17:09:30 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.502 1+0 records in 00:05:21.502 1+0 records out 00:05:21.502 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191351 s, 21.4 MB/s 00:05:21.502 17:09:30 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:21.502 17:09:30 -- common/autotest_common.sh@872 -- # size=4096 00:05:21.502 17:09:30 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:21.502 17:09:30 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:21.502 17:09:30 -- common/autotest_common.sh@875 -- # return 0 00:05:21.502 17:09:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.502 17:09:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.502 17:09:30 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:21.502 17:09:30 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.502 
17:09:30 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:21.761 { 00:05:21.761 "nbd_device": "/dev/nbd0", 00:05:21.761 "bdev_name": "Malloc0" 00:05:21.761 }, 00:05:21.761 { 00:05:21.761 "nbd_device": "/dev/nbd1", 00:05:21.761 "bdev_name": "Malloc1" 00:05:21.761 } 00:05:21.761 ]' 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:21.761 { 00:05:21.761 "nbd_device": "/dev/nbd0", 00:05:21.761 "bdev_name": "Malloc0" 00:05:21.761 }, 00:05:21.761 { 00:05:21.761 "nbd_device": "/dev/nbd1", 00:05:21.761 "bdev_name": "Malloc1" 00:05:21.761 } 00:05:21.761 ]' 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:21.761 /dev/nbd1' 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:21.761 /dev/nbd1' 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@65 -- # count=2 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@95 -- # count=2 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:21.761 256+0 records in 00:05:21.761 256+0 records out 00:05:21.761 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103659 s, 101 MB/s 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:21.761 256+0 records in 00:05:21.761 256+0 records out 00:05:21.761 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013491 s, 77.7 MB/s 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:21.761 256+0 records in 00:05:21.761 256+0 records out 00:05:21.761 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144821 s, 72.4 MB/s 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:21.761 
17:09:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@51 -- # local i 00:05:21.761 17:09:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:21.761 17:09:31 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:22.020 17:09:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:22.020 17:09:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:22.020 17:09:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:22.020 17:09:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.020 17:09:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.020 17:09:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:22.020 17:09:31 -- bdev/nbd_common.sh@41 -- # break 00:05:22.020 17:09:31 -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.020 17:09:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.020 17:09:31 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:22.278 17:09:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:22.278 17:09:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:22.278 17:09:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:22.278 17:09:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.278 17:09:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.278 17:09:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:22.278 17:09:31 -- bdev/nbd_common.sh@41 -- # break 00:05:22.278 17:09:31 -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.278 17:09:31 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.278 17:09:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.278 17:09:31 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.538 17:09:31 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:22.538 17:09:31 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.538 17:09:31 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:22.538 17:09:31 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:22.538 17:09:31 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:22.538 17:09:31 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.538 17:09:31 -- bdev/nbd_common.sh@65 -- # true 00:05:22.538 17:09:31 -- bdev/nbd_common.sh@65 -- # count=0 00:05:22.538 17:09:31 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:22.538 17:09:31 -- bdev/nbd_common.sh@104 -- # count=0 00:05:22.538 
17:09:31 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:22.538 17:09:31 -- bdev/nbd_common.sh@109 -- # return 0 00:05:22.538 17:09:31 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:22.538 17:09:31 -- event/event.sh@35 -- # sleep 3 00:05:22.797 [2024-04-24 17:09:31.984835] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:23.056 [2024-04-24 17:09:32.051071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.056 [2024-04-24 17:09:32.051073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.056 [2024-04-24 17:09:32.093182] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:23.056 [2024-04-24 17:09:32.093225] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:25.590 17:09:34 -- event/event.sh@23 -- # for i in {0..2} 00:05:25.590 17:09:34 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:25.590 spdk_app_start Round 2 00:05:25.590 17:09:34 -- event/event.sh@25 -- # waitforlisten 2934009 /var/tmp/spdk-nbd.sock 00:05:25.590 17:09:34 -- common/autotest_common.sh@817 -- # '[' -z 2934009 ']' 00:05:25.590 17:09:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:25.590 17:09:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:25.590 17:09:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:25.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:25.590 17:09:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:25.590 17:09:34 -- common/autotest_common.sh@10 -- # set +x 00:05:25.849 17:09:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:25.849 17:09:34 -- common/autotest_common.sh@850 -- # return 0 00:05:25.849 17:09:34 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.107 Malloc0 00:05:26.107 17:09:35 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.107 Malloc1 00:05:26.107 17:09:35 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.107 17:09:35 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.107 17:09:35 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.107 17:09:35 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:26.107 17:09:35 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.107 17:09:35 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:26.107 17:09:35 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.108 17:09:35 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.108 17:09:35 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.108 17:09:35 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:26.108 17:09:35 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.108 17:09:35 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:26.108 17:09:35 -- bdev/nbd_common.sh@12 -- # local i 00:05:26.108 17:09:35 -- bdev/nbd_common.sh@14 -- 
# (( i = 0 )) 00:05:26.108 17:09:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.108 17:09:35 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:26.366 /dev/nbd0 00:05:26.366 17:09:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:26.366 17:09:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:26.366 17:09:35 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:26.366 17:09:35 -- common/autotest_common.sh@855 -- # local i 00:05:26.366 17:09:35 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:26.366 17:09:35 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:26.366 17:09:35 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:26.366 17:09:35 -- common/autotest_common.sh@859 -- # break 00:05:26.366 17:09:35 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:26.366 17:09:35 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:26.366 17:09:35 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.366 1+0 records in 00:05:26.367 1+0 records out 00:05:26.367 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214266 s, 19.1 MB/s 00:05:26.367 17:09:35 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:26.367 17:09:35 -- common/autotest_common.sh@872 -- # size=4096 00:05:26.367 17:09:35 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:26.367 17:09:35 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:26.367 17:09:35 -- common/autotest_common.sh@875 -- # return 0 00:05:26.367 17:09:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.367 17:09:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.367 17:09:35 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:26.625 /dev/nbd1 00:05:26.625 17:09:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:26.625 17:09:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:26.625 17:09:35 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:26.625 17:09:35 -- common/autotest_common.sh@855 -- # local i 00:05:26.625 17:09:35 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:26.625 17:09:35 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:26.625 17:09:35 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:26.625 17:09:35 -- common/autotest_common.sh@859 -- # break 00:05:26.625 17:09:35 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:26.625 17:09:35 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:26.625 17:09:35 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.625 1+0 records in 00:05:26.625 1+0 records out 00:05:26.625 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189626 s, 21.6 MB/s 00:05:26.625 17:09:35 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:26.625 17:09:35 -- common/autotest_common.sh@872 -- # size=4096 00:05:26.625 17:09:35 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:26.625 17:09:35 -- 
common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:26.625 17:09:35 -- common/autotest_common.sh@875 -- # return 0 00:05:26.625 17:09:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.625 17:09:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.625 17:09:35 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.625 17:09:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.625 17:09:35 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.625 17:09:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:26.625 { 00:05:26.625 "nbd_device": "/dev/nbd0", 00:05:26.625 "bdev_name": "Malloc0" 00:05:26.625 }, 00:05:26.625 { 00:05:26.625 "nbd_device": "/dev/nbd1", 00:05:26.625 "bdev_name": "Malloc1" 00:05:26.625 } 00:05:26.625 ]' 00:05:26.625 17:09:35 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:26.625 { 00:05:26.625 "nbd_device": "/dev/nbd0", 00:05:26.625 "bdev_name": "Malloc0" 00:05:26.625 }, 00:05:26.625 { 00:05:26.625 "nbd_device": "/dev/nbd1", 00:05:26.625 "bdev_name": "Malloc1" 00:05:26.625 } 00:05:26.625 ]' 00:05:26.625 17:09:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.883 17:09:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:26.883 /dev/nbd1' 00:05:26.883 17:09:35 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:26.883 /dev/nbd1' 00:05:26.883 17:09:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.883 17:09:35 -- bdev/nbd_common.sh@65 -- # count=2 00:05:26.883 17:09:35 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:26.883 17:09:35 -- bdev/nbd_common.sh@95 -- # count=2 00:05:26.883 17:09:35 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:26.883 17:09:35 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:26.883 17:09:35 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.883 17:09:35 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.883 17:09:35 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:26.883 17:09:35 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:26.883 17:09:35 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:26.883 17:09:35 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:26.883 256+0 records in 00:05:26.883 256+0 records out 00:05:26.883 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103103 s, 102 MB/s 00:05:26.883 17:09:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.884 17:09:35 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:26.884 256+0 records in 00:05:26.884 256+0 records out 00:05:26.884 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138034 s, 76.0 MB/s 00:05:26.884 17:09:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.884 17:09:35 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:26.884 256+0 records in 00:05:26.884 256+0 records out 00:05:26.884 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014532 s, 72.2 MB/s 00:05:26.884 17:09:35 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:26.884 17:09:35 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:05:26.884 17:09:35 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.884 17:09:35 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:26.884 17:09:35 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:26.884 17:09:35 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:26.884 17:09:35 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:26.884 17:09:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.884 17:09:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:26.884 17:09:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.884 17:09:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:26.884 17:09:35 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:26.884 17:09:35 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:26.884 17:09:35 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.884 17:09:35 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.884 17:09:35 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:26.884 17:09:35 -- bdev/nbd_common.sh@51 -- # local i 00:05:26.884 17:09:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.884 17:09:35 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:27.141 17:09:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:27.141 17:09:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:27.141 17:09:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:27.141 17:09:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:27.141 17:09:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:27.141 17:09:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:27.141 17:09:36 -- bdev/nbd_common.sh@41 -- # break 00:05:27.141 17:09:36 -- bdev/nbd_common.sh@45 -- # return 0 00:05:27.141 17:09:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.141 17:09:36 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:27.141 17:09:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:27.141 17:09:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:27.141 17:09:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:27.141 17:09:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:27.141 17:09:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:27.141 17:09:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:27.141 17:09:36 -- bdev/nbd_common.sh@41 -- # break 00:05:27.141 17:09:36 -- bdev/nbd_common.sh@45 -- # return 0 00:05:27.141 17:09:36 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.141 17:09:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.141 17:09:36 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.399 17:09:36 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:27.399 17:09:36 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:27.399 17:09:36 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:27.399 17:09:36 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:27.399 17:09:36 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:27.399 17:09:36 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.399 17:09:36 -- bdev/nbd_common.sh@65 -- # true 00:05:27.399 17:09:36 -- bdev/nbd_common.sh@65 -- # count=0 00:05:27.399 17:09:36 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:27.399 17:09:36 -- bdev/nbd_common.sh@104 -- # count=0 00:05:27.399 17:09:36 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:27.399 17:09:36 -- bdev/nbd_common.sh@109 -- # return 0 00:05:27.399 17:09:36 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:27.657 17:09:36 -- event/event.sh@35 -- # sleep 3 00:05:27.916 [2024-04-24 17:09:36.960453] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:27.916 [2024-04-24 17:09:37.025749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.916 [2024-04-24 17:09:37.025750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.916 [2024-04-24 17:09:37.067451] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:27.916 [2024-04-24 17:09:37.067495] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:31.198 17:09:39 -- event/event.sh@38 -- # waitforlisten 2934009 /var/tmp/spdk-nbd.sock 00:05:31.198 17:09:39 -- common/autotest_common.sh@817 -- # '[' -z 2934009 ']' 00:05:31.198 17:09:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:31.198 17:09:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:31.198 17:09:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:31.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:31.198 17:09:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:31.198 17:09:39 -- common/autotest_common.sh@10 -- # set +x 00:05:31.198 17:09:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:31.198 17:09:39 -- common/autotest_common.sh@850 -- # return 0 00:05:31.198 17:09:39 -- event/event.sh@39 -- # killprocess 2934009 00:05:31.198 17:09:39 -- common/autotest_common.sh@936 -- # '[' -z 2934009 ']' 00:05:31.198 17:09:39 -- common/autotest_common.sh@940 -- # kill -0 2934009 00:05:31.198 17:09:39 -- common/autotest_common.sh@941 -- # uname 00:05:31.198 17:09:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:31.198 17:09:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2934009 00:05:31.198 17:09:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:31.198 17:09:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:31.198 17:09:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2934009' 00:05:31.198 killing process with pid 2934009 00:05:31.198 17:09:39 -- common/autotest_common.sh@955 -- # kill 2934009 00:05:31.198 17:09:39 -- common/autotest_common.sh@960 -- # wait 2934009 00:05:31.198 spdk_app_start is called in Round 0. 00:05:31.198 Shutdown signal received, stop current app iteration 00:05:31.198 Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 reinitialization... 00:05:31.198 spdk_app_start is called in Round 1. 
00:05:31.198 Shutdown signal received, stop current app iteration 00:05:31.198 Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 reinitialization... 00:05:31.198 spdk_app_start is called in Round 2. 00:05:31.198 Shutdown signal received, stop current app iteration 00:05:31.198 Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 reinitialization... 00:05:31.198 spdk_app_start is called in Round 3. 00:05:31.198 Shutdown signal received, stop current app iteration 00:05:31.198 17:09:40 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:31.198 17:09:40 -- event/event.sh@42 -- # return 0 00:05:31.198 00:05:31.198 real 0m16.096s 00:05:31.198 user 0m34.696s 00:05:31.198 sys 0m2.291s 00:05:31.198 17:09:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:31.198 17:09:40 -- common/autotest_common.sh@10 -- # set +x 00:05:31.198 ************************************ 00:05:31.198 END TEST app_repeat 00:05:31.198 ************************************ 00:05:31.198 17:09:40 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:31.198 17:09:40 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:31.198 17:09:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:31.199 17:09:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.199 17:09:40 -- common/autotest_common.sh@10 -- # set +x 00:05:31.199 ************************************ 00:05:31.199 START TEST cpu_locks 00:05:31.199 ************************************ 00:05:31.199 17:09:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:31.199 * Looking for test storage... 00:05:31.199 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:05:31.199 17:09:40 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:31.199 17:09:40 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:31.199 17:09:40 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:31.199 17:09:40 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:31.199 17:09:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:31.199 17:09:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.199 17:09:40 -- common/autotest_common.sh@10 -- # set +x 00:05:31.457 ************************************ 00:05:31.457 START TEST default_locks 00:05:31.457 ************************************ 00:05:31.457 17:09:40 -- common/autotest_common.sh@1111 -- # default_locks 00:05:31.457 17:09:40 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2937013 00:05:31.457 17:09:40 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.457 17:09:40 -- event/cpu_locks.sh@47 -- # waitforlisten 2937013 00:05:31.457 17:09:40 -- common/autotest_common.sh@817 -- # '[' -z 2937013 ']' 00:05:31.457 17:09:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.457 17:09:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:31.457 17:09:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
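app_repeat above runs the same start/verify/SIGTERM cycle for Rounds 0 through 2, and the killprocess helper then appears one last time, as it does after every test in this section: check the pid is still alive, read its comm name, refuse to signal a sudo wrapper, then kill and wait. A compact sketch of that pattern (the sudo case is not exercised in the trace, so this sketch simply refuses it):

    kill_test_process() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                    # is it still running?
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1    # sketch: bail out rather than escalate
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                   # works because the test launched the pid itself
    }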
00:05:31.457 17:09:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:31.457 17:09:40 -- common/autotest_common.sh@10 -- # set +x 00:05:31.457 [2024-04-24 17:09:40.562008] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:05:31.457 [2024-04-24 17:09:40.562046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2937013 ] 00:05:31.457 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.457 [2024-04-24 17:09:40.617617] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.457 [2024-04-24 17:09:40.694915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.392 17:09:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:32.392 17:09:41 -- common/autotest_common.sh@850 -- # return 0 00:05:32.392 17:09:41 -- event/cpu_locks.sh@49 -- # locks_exist 2937013 00:05:32.393 17:09:41 -- event/cpu_locks.sh@22 -- # lslocks -p 2937013 00:05:32.393 17:09:41 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:32.393 lslocks: write error 00:05:32.393 17:09:41 -- event/cpu_locks.sh@50 -- # killprocess 2937013 00:05:32.393 17:09:41 -- common/autotest_common.sh@936 -- # '[' -z 2937013 ']' 00:05:32.393 17:09:41 -- common/autotest_common.sh@940 -- # kill -0 2937013 00:05:32.393 17:09:41 -- common/autotest_common.sh@941 -- # uname 00:05:32.393 17:09:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:32.393 17:09:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2937013 00:05:32.393 17:09:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:32.393 17:09:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:32.393 17:09:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2937013' 00:05:32.393 killing process with pid 2937013 00:05:32.393 17:09:41 -- common/autotest_common.sh@955 -- # kill 2937013 00:05:32.393 17:09:41 -- common/autotest_common.sh@960 -- # wait 2937013 00:05:32.960 17:09:41 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2937013 00:05:32.960 17:09:41 -- common/autotest_common.sh@638 -- # local es=0 00:05:32.960 17:09:41 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 2937013 00:05:32.960 17:09:41 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:32.960 17:09:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:32.960 17:09:41 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:32.960 17:09:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:32.960 17:09:41 -- common/autotest_common.sh@641 -- # waitforlisten 2937013 00:05:32.960 17:09:41 -- common/autotest_common.sh@817 -- # '[' -z 2937013 ']' 00:05:32.960 17:09:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.960 17:09:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:32.960 17:09:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:32.960 17:09:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:32.960 17:09:41 -- common/autotest_common.sh@10 -- # set +x 00:05:32.960 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (2937013) - No such process 00:05:32.960 ERROR: process (pid: 2937013) is no longer running 00:05:32.960 17:09:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:32.960 17:09:41 -- common/autotest_common.sh@850 -- # return 1 00:05:32.960 17:09:41 -- common/autotest_common.sh@641 -- # es=1 00:05:32.960 17:09:41 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:32.960 17:09:41 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:32.960 17:09:41 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:32.960 17:09:41 -- event/cpu_locks.sh@54 -- # no_locks 00:05:32.960 17:09:41 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:32.960 17:09:41 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:32.960 17:09:41 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:32.960 00:05:32.960 real 0m1.437s 00:05:32.960 user 0m1.496s 00:05:32.960 sys 0m0.455s 00:05:32.960 17:09:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:32.960 17:09:41 -- common/autotest_common.sh@10 -- # set +x 00:05:32.960 ************************************ 00:05:32.960 END TEST default_locks 00:05:32.960 ************************************ 00:05:32.960 17:09:41 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:32.960 17:09:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.960 17:09:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.960 17:09:41 -- common/autotest_common.sh@10 -- # set +x 00:05:32.960 ************************************ 00:05:32.960 START TEST default_locks_via_rpc 00:05:32.960 ************************************ 00:05:32.960 17:09:42 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:05:32.960 17:09:42 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2937285 00:05:32.960 17:09:42 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.960 17:09:42 -- event/cpu_locks.sh@63 -- # waitforlisten 2937285 00:05:32.960 17:09:42 -- common/autotest_common.sh@817 -- # '[' -z 2937285 ']' 00:05:32.960 17:09:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.960 17:09:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:32.960 17:09:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.960 17:09:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:32.960 17:09:42 -- common/autotest_common.sh@10 -- # set +x 00:05:32.960 [2024-04-24 17:09:42.162412] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
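The default_locks test that just finished starts a single-core target, proves the CPU core lock is held by looking for spdk_cpu_lock in lslocks output, kills the target, and then checks that probing the dead pid fails. Reduced to a sketch (the binary path and the fixed sleep standing in for waitforlisten are assumptions):

    ./build/bin/spdk_tgt -m 0x1 &
    tgt_pid=$!
    sleep 2                                           # the harness polls the RPC socket instead

    # A running target shows up in lslocks holding its spdk_cpu_lock file.
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "core lock held by pid $tgt_pid"

    kill "$tgt_pid"
    wait "$tgt_pid"

    # After the kill, the same probe must fail - the harness asserts this with "NOT waitforlisten".
    kill -0 "$tgt_pid" 2>/dev/null && echo "unexpected: pid $tgt_pid still alive"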
00:05:32.960 [2024-04-24 17:09:42.162452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2937285 ] 00:05:32.960 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.219 [2024-04-24 17:09:42.216537] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.219 [2024-04-24 17:09:42.286313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.785 17:09:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:33.785 17:09:42 -- common/autotest_common.sh@850 -- # return 0 00:05:33.785 17:09:42 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:33.785 17:09:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:33.785 17:09:42 -- common/autotest_common.sh@10 -- # set +x 00:05:33.785 17:09:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:33.785 17:09:42 -- event/cpu_locks.sh@67 -- # no_locks 00:05:33.785 17:09:42 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:33.785 17:09:42 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:33.785 17:09:42 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:33.785 17:09:42 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:33.785 17:09:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:33.785 17:09:42 -- common/autotest_common.sh@10 -- # set +x 00:05:33.785 17:09:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:33.785 17:09:42 -- event/cpu_locks.sh@71 -- # locks_exist 2937285 00:05:33.785 17:09:42 -- event/cpu_locks.sh@22 -- # lslocks -p 2937285 00:05:33.785 17:09:42 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:34.351 17:09:43 -- event/cpu_locks.sh@73 -- # killprocess 2937285 00:05:34.351 17:09:43 -- common/autotest_common.sh@936 -- # '[' -z 2937285 ']' 00:05:34.351 17:09:43 -- common/autotest_common.sh@940 -- # kill -0 2937285 00:05:34.351 17:09:43 -- common/autotest_common.sh@941 -- # uname 00:05:34.351 17:09:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:34.351 17:09:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2937285 00:05:34.351 17:09:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:34.351 17:09:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:34.351 17:09:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2937285' 00:05:34.351 killing process with pid 2937285 00:05:34.351 17:09:43 -- common/autotest_common.sh@955 -- # kill 2937285 00:05:34.351 17:09:43 -- common/autotest_common.sh@960 -- # wait 2937285 00:05:34.609 00:05:34.609 real 0m1.671s 00:05:34.609 user 0m1.764s 00:05:34.609 sys 0m0.527s 00:05:34.609 17:09:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:34.609 17:09:43 -- common/autotest_common.sh@10 -- # set +x 00:05:34.609 ************************************ 00:05:34.609 END TEST default_locks_via_rpc 00:05:34.609 ************************************ 00:05:34.610 17:09:43 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:34.610 17:09:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.610 17:09:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.610 17:09:43 -- common/autotest_common.sh@10 -- # set +x 00:05:34.868 ************************************ 00:05:34.868 START TEST non_locking_app_on_locked_coremask 
00:05:34.868 ************************************ 00:05:34.868 17:09:43 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:05:34.868 17:09:43 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2937556 00:05:34.868 17:09:43 -- event/cpu_locks.sh@81 -- # waitforlisten 2937556 /var/tmp/spdk.sock 00:05:34.868 17:09:43 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:34.868 17:09:43 -- common/autotest_common.sh@817 -- # '[' -z 2937556 ']' 00:05:34.868 17:09:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.868 17:09:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:34.868 17:09:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.868 17:09:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:34.868 17:09:43 -- common/autotest_common.sh@10 -- # set +x 00:05:34.868 [2024-04-24 17:09:43.982839] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:05:34.868 [2024-04-24 17:09:43.982880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2937556 ] 00:05:34.868 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.868 [2024-04-24 17:09:44.035753] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.868 [2024-04-24 17:09:44.114586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.802 17:09:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:35.802 17:09:44 -- common/autotest_common.sh@850 -- # return 0 00:05:35.802 17:09:44 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:35.802 17:09:44 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2937788 00:05:35.803 17:09:44 -- event/cpu_locks.sh@85 -- # waitforlisten 2937788 /var/tmp/spdk2.sock 00:05:35.803 17:09:44 -- common/autotest_common.sh@817 -- # '[' -z 2937788 ']' 00:05:35.803 17:09:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:35.803 17:09:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:35.803 17:09:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:35.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:35.803 17:09:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:35.803 17:09:44 -- common/autotest_common.sh@10 -- # set +x 00:05:35.803 [2024-04-24 17:09:44.804152] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:05:35.803 [2024-04-24 17:09:44.804197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2937788 ] 00:05:35.803 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.803 [2024-04-24 17:09:44.872884] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
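The non_locking_app_on_locked_coremask case above only comes up cleanly because the second target skips lock claiming: same -m 0x1 mask, but with --disable-cpumask-locks and its own RPC socket. A rough sketch of the same setup outside the harness (binary path shortened; the real run uses the full workspace path to spdk_tgt):

# First target claims the core-0 lock file; the second deliberately does not.
./build/bin/spdk_tgt -m 0x1 &
./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &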
00:05:35.803 [2024-04-24 17:09:44.872905] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.803 [2024-04-24 17:09:45.015686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.736 17:09:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:36.736 17:09:45 -- common/autotest_common.sh@850 -- # return 0 00:05:36.736 17:09:45 -- event/cpu_locks.sh@87 -- # locks_exist 2937556 00:05:36.736 17:09:45 -- event/cpu_locks.sh@22 -- # lslocks -p 2937556 00:05:36.736 17:09:45 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:37.301 lslocks: write error 00:05:37.301 17:09:46 -- event/cpu_locks.sh@89 -- # killprocess 2937556 00:05:37.301 17:09:46 -- common/autotest_common.sh@936 -- # '[' -z 2937556 ']' 00:05:37.302 17:09:46 -- common/autotest_common.sh@940 -- # kill -0 2937556 00:05:37.302 17:09:46 -- common/autotest_common.sh@941 -- # uname 00:05:37.302 17:09:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:37.302 17:09:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2937556 00:05:37.302 17:09:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:37.302 17:09:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:37.302 17:09:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2937556' 00:05:37.302 killing process with pid 2937556 00:05:37.302 17:09:46 -- common/autotest_common.sh@955 -- # kill 2937556 00:05:37.302 17:09:46 -- common/autotest_common.sh@960 -- # wait 2937556 00:05:37.918 17:09:46 -- event/cpu_locks.sh@90 -- # killprocess 2937788 00:05:37.918 17:09:46 -- common/autotest_common.sh@936 -- # '[' -z 2937788 ']' 00:05:37.918 17:09:46 -- common/autotest_common.sh@940 -- # kill -0 2937788 00:05:37.918 17:09:46 -- common/autotest_common.sh@941 -- # uname 00:05:37.918 17:09:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:37.918 17:09:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2937788 00:05:37.918 17:09:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:37.918 17:09:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:37.918 17:09:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2937788' 00:05:37.918 killing process with pid 2937788 00:05:37.918 17:09:47 -- common/autotest_common.sh@955 -- # kill 2937788 00:05:37.918 17:09:47 -- common/autotest_common.sh@960 -- # wait 2937788 00:05:38.176 00:05:38.176 real 0m3.413s 00:05:38.176 user 0m3.617s 00:05:38.176 sys 0m0.932s 00:05:38.176 17:09:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:38.176 17:09:47 -- common/autotest_common.sh@10 -- # set +x 00:05:38.176 ************************************ 00:05:38.176 END TEST non_locking_app_on_locked_coremask 00:05:38.176 ************************************ 00:05:38.176 17:09:47 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:38.176 17:09:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:38.176 17:09:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.176 17:09:47 -- common/autotest_common.sh@10 -- # set +x 00:05:38.434 ************************************ 00:05:38.434 START TEST locking_app_on_unlocked_coremask 00:05:38.434 ************************************ 00:05:38.434 17:09:47 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:05:38.434 17:09:47 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2938280 00:05:38.434 17:09:47 -- 
event/cpu_locks.sh@99 -- # waitforlisten 2938280 /var/tmp/spdk.sock 00:05:38.434 17:09:47 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:38.434 17:09:47 -- common/autotest_common.sh@817 -- # '[' -z 2938280 ']' 00:05:38.434 17:09:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.434 17:09:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:38.434 17:09:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.434 17:09:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:38.434 17:09:47 -- common/autotest_common.sh@10 -- # set +x 00:05:38.434 [2024-04-24 17:09:47.580780] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:05:38.434 [2024-04-24 17:09:47.580824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2938280 ] 00:05:38.434 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.434 [2024-04-24 17:09:47.636575] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:38.434 [2024-04-24 17:09:47.636603] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.692 [2024-04-24 17:09:47.711348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.258 17:09:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:39.258 17:09:48 -- common/autotest_common.sh@850 -- # return 0 00:05:39.258 17:09:48 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2938401 00:05:39.258 17:09:48 -- event/cpu_locks.sh@103 -- # waitforlisten 2938401 /var/tmp/spdk2.sock 00:05:39.258 17:09:48 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:39.258 17:09:48 -- common/autotest_common.sh@817 -- # '[' -z 2938401 ']' 00:05:39.258 17:09:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:39.258 17:09:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:39.258 17:09:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:39.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:39.258 17:09:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:39.258 17:09:48 -- common/autotest_common.sh@10 -- # set +x 00:05:39.258 [2024-04-24 17:09:48.427139] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:05:39.258 [2024-04-24 17:09:48.427187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2938401 ] 00:05:39.258 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.258 [2024-04-24 17:09:48.502938] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.516 [2024-04-24 17:09:48.646162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.083 17:09:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:40.083 17:09:49 -- common/autotest_common.sh@850 -- # return 0 00:05:40.083 17:09:49 -- event/cpu_locks.sh@105 -- # locks_exist 2938401 00:05:40.083 17:09:49 -- event/cpu_locks.sh@22 -- # lslocks -p 2938401 00:05:40.083 17:09:49 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:40.648 lslocks: write error 00:05:40.648 17:09:49 -- event/cpu_locks.sh@107 -- # killprocess 2938280 00:05:40.648 17:09:49 -- common/autotest_common.sh@936 -- # '[' -z 2938280 ']' 00:05:40.648 17:09:49 -- common/autotest_common.sh@940 -- # kill -0 2938280 00:05:40.648 17:09:49 -- common/autotest_common.sh@941 -- # uname 00:05:40.648 17:09:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:40.648 17:09:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2938280 00:05:40.648 17:09:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:40.648 17:09:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:40.648 17:09:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2938280' 00:05:40.648 killing process with pid 2938280 00:05:40.648 17:09:49 -- common/autotest_common.sh@955 -- # kill 2938280 00:05:40.648 17:09:49 -- common/autotest_common.sh@960 -- # wait 2938280 00:05:41.581 17:09:50 -- event/cpu_locks.sh@108 -- # killprocess 2938401 00:05:41.581 17:09:50 -- common/autotest_common.sh@936 -- # '[' -z 2938401 ']' 00:05:41.581 17:09:50 -- common/autotest_common.sh@940 -- # kill -0 2938401 00:05:41.581 17:09:50 -- common/autotest_common.sh@941 -- # uname 00:05:41.581 17:09:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:41.581 17:09:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2938401 00:05:41.581 17:09:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:41.581 17:09:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:41.581 17:09:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2938401' 00:05:41.581 killing process with pid 2938401 00:05:41.581 17:09:50 -- common/autotest_common.sh@955 -- # kill 2938401 00:05:41.581 17:09:50 -- common/autotest_common.sh@960 -- # wait 2938401 00:05:41.840 00:05:41.840 real 0m3.335s 00:05:41.840 user 0m3.565s 00:05:41.840 sys 0m0.917s 00:05:41.840 17:09:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:41.840 17:09:50 -- common/autotest_common.sh@10 -- # set +x 00:05:41.840 ************************************ 00:05:41.840 END TEST locking_app_on_unlocked_coremask 00:05:41.840 ************************************ 00:05:41.840 17:09:50 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:41.840 17:09:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.840 17:09:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.840 17:09:50 -- common/autotest_common.sh@10 -- # set +x 00:05:41.840 
************************************ 00:05:41.840 START TEST locking_app_on_locked_coremask 00:05:41.840 ************************************ 00:05:41.840 17:09:51 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:05:41.840 17:09:51 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:41.840 17:09:51 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2938830 00:05:41.840 17:09:51 -- event/cpu_locks.sh@116 -- # waitforlisten 2938830 /var/tmp/spdk.sock 00:05:41.840 17:09:51 -- common/autotest_common.sh@817 -- # '[' -z 2938830 ']' 00:05:41.840 17:09:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.840 17:09:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:41.840 17:09:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.840 17:09:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:41.840 17:09:51 -- common/autotest_common.sh@10 -- # set +x 00:05:41.840 [2024-04-24 17:09:51.054134] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:05:41.840 [2024-04-24 17:09:51.054174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2938830 ] 00:05:41.840 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.098 [2024-04-24 17:09:51.103735] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.098 [2024-04-24 17:09:51.181191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.664 17:09:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:42.664 17:09:51 -- common/autotest_common.sh@850 -- # return 0 00:05:42.664 17:09:51 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2939032 00:05:42.664 17:09:51 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2939032 /var/tmp/spdk2.sock 00:05:42.664 17:09:51 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:42.664 17:09:51 -- common/autotest_common.sh@638 -- # local es=0 00:05:42.664 17:09:51 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 2939032 /var/tmp/spdk2.sock 00:05:42.664 17:09:51 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:42.664 17:09:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:42.664 17:09:51 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:42.664 17:09:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:42.664 17:09:51 -- common/autotest_common.sh@641 -- # waitforlisten 2939032 /var/tmp/spdk2.sock 00:05:42.664 17:09:51 -- common/autotest_common.sh@817 -- # '[' -z 2939032 ']' 00:05:42.664 17:09:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:42.664 17:09:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:42.664 17:09:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:42.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
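locking_app_on_locked_coremask goes the other way: the second target is started without --disable-cpumask-locks, so its waitforlisten is wrapped in NOT and is expected to fail once the core-0 lock claim is rejected. A simplified stand-in for that helper (the real one lives in test/common/autotest_common.sh and is more elaborate):

# Succeeds only when the wrapped command fails, mirroring the
# "NOT waitforlisten 2939032 /var/tmp/spdk2.sock" pattern traced above.
NOT() {
    if "$@"; then
        return 1
    fi
    return 0
}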
00:05:42.664 17:09:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:42.664 17:09:51 -- common/autotest_common.sh@10 -- # set +x 00:05:42.664 [2024-04-24 17:09:51.888061] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:05:42.664 [2024-04-24 17:09:51.888110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2939032 ] 00:05:42.664 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.922 [2024-04-24 17:09:51.962870] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2938830 has claimed it. 00:05:42.922 [2024-04-24 17:09:51.962901] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:43.488 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (2939032) - No such process 00:05:43.488 ERROR: process (pid: 2939032) is no longer running 00:05:43.488 17:09:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:43.488 17:09:52 -- common/autotest_common.sh@850 -- # return 1 00:05:43.488 17:09:52 -- common/autotest_common.sh@641 -- # es=1 00:05:43.488 17:09:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:43.488 17:09:52 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:43.488 17:09:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:43.488 17:09:52 -- event/cpu_locks.sh@122 -- # locks_exist 2938830 00:05:43.488 17:09:52 -- event/cpu_locks.sh@22 -- # lslocks -p 2938830 00:05:43.488 17:09:52 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:43.746 lslocks: write error 00:05:43.746 17:09:52 -- event/cpu_locks.sh@124 -- # killprocess 2938830 00:05:43.746 17:09:52 -- common/autotest_common.sh@936 -- # '[' -z 2938830 ']' 00:05:43.746 17:09:52 -- common/autotest_common.sh@940 -- # kill -0 2938830 00:05:43.746 17:09:52 -- common/autotest_common.sh@941 -- # uname 00:05:43.746 17:09:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:43.746 17:09:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2938830 00:05:43.746 17:09:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:43.746 17:09:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:43.746 17:09:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2938830' 00:05:43.746 killing process with pid 2938830 00:05:43.746 17:09:52 -- common/autotest_common.sh@955 -- # kill 2938830 00:05:43.746 17:09:52 -- common/autotest_common.sh@960 -- # wait 2938830 00:05:44.312 00:05:44.312 real 0m2.282s 00:05:44.312 user 0m2.519s 00:05:44.312 sys 0m0.575s 00:05:44.312 17:09:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:44.312 17:09:53 -- common/autotest_common.sh@10 -- # set +x 00:05:44.312 ************************************ 00:05:44.312 END TEST locking_app_on_locked_coremask 00:05:44.312 ************************************ 00:05:44.312 17:09:53 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:44.312 17:09:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:44.312 17:09:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.312 17:09:53 -- common/autotest_common.sh@10 -- # set +x 00:05:44.312 ************************************ 00:05:44.312 START TEST locking_overlapped_coremask 00:05:44.312 
************************************ 00:05:44.312 17:09:53 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:05:44.312 17:09:53 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2939299 00:05:44.312 17:09:53 -- event/cpu_locks.sh@133 -- # waitforlisten 2939299 /var/tmp/spdk.sock 00:05:44.312 17:09:53 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:44.312 17:09:53 -- common/autotest_common.sh@817 -- # '[' -z 2939299 ']' 00:05:44.312 17:09:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.312 17:09:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:44.312 17:09:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.312 17:09:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:44.312 17:09:53 -- common/autotest_common.sh@10 -- # set +x 00:05:44.312 [2024-04-24 17:09:53.522712] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:05:44.312 [2024-04-24 17:09:53.522761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2939299 ] 00:05:44.312 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.570 [2024-04-24 17:09:53.577639] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:44.570 [2024-04-24 17:09:53.651051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.570 [2024-04-24 17:09:53.651149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.570 [2024-04-24 17:09:53.651150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.136 17:09:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:45.136 17:09:54 -- common/autotest_common.sh@850 -- # return 0 00:05:45.136 17:09:54 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2939533 00:05:45.137 17:09:54 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2939533 /var/tmp/spdk2.sock 00:05:45.137 17:09:54 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:45.137 17:09:54 -- common/autotest_common.sh@638 -- # local es=0 00:05:45.137 17:09:54 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 2939533 /var/tmp/spdk2.sock 00:05:45.137 17:09:54 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:45.137 17:09:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:45.137 17:09:54 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:45.137 17:09:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:45.137 17:09:54 -- common/autotest_common.sh@641 -- # waitforlisten 2939533 /var/tmp/spdk2.sock 00:05:45.137 17:09:54 -- common/autotest_common.sh@817 -- # '[' -z 2939533 ']' 00:05:45.137 17:09:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:45.137 17:09:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:45.137 17:09:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
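The two core masks are the whole point of this test: 0x7 is binary 111 (cores 0-2) and 0x1c is binary 11100 (cores 2-4), so the only core the two targets share is core 2, and that is the claim expected to fail in the output below. A quick way to decode the masks in the same shell:

# Decode which cores each mask selects; prints "0 1 2" for 0x7 and "2 3 4" for 0x1c.
printf 'mask 0x7  -> cores: '; for i in {0..7}; do (( (0x7  >> i) & 1 )) && printf '%d ' "$i"; done; echo
printf 'mask 0x1c -> cores: '; for i in {0..7}; do (( (0x1c >> i) & 1 )) && printf '%d ' "$i"; done; echo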
00:05:45.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:45.137 17:09:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:45.137 17:09:54 -- common/autotest_common.sh@10 -- # set +x 00:05:45.137 [2024-04-24 17:09:54.344936] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:05:45.137 [2024-04-24 17:09:54.344980] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2939533 ] 00:05:45.137 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.395 [2024-04-24 17:09:54.419590] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2939299 has claimed it. 00:05:45.395 [2024-04-24 17:09:54.419628] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:45.960 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (2939533) - No such process 00:05:45.960 ERROR: process (pid: 2939533) is no longer running 00:05:45.960 17:09:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:45.960 17:09:54 -- common/autotest_common.sh@850 -- # return 1 00:05:45.960 17:09:54 -- common/autotest_common.sh@641 -- # es=1 00:05:45.960 17:09:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:45.960 17:09:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:45.960 17:09:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:45.960 17:09:54 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:45.960 17:09:54 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:45.960 17:09:54 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:45.960 17:09:54 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:45.960 17:09:54 -- event/cpu_locks.sh@141 -- # killprocess 2939299 00:05:45.960 17:09:54 -- common/autotest_common.sh@936 -- # '[' -z 2939299 ']' 00:05:45.960 17:09:54 -- common/autotest_common.sh@940 -- # kill -0 2939299 00:05:45.960 17:09:54 -- common/autotest_common.sh@941 -- # uname 00:05:45.960 17:09:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:45.960 17:09:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2939299 00:05:45.960 17:09:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:45.960 17:09:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:45.960 17:09:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2939299' 00:05:45.960 killing process with pid 2939299 00:05:45.960 17:09:55 -- common/autotest_common.sh@955 -- # kill 2939299 00:05:45.960 17:09:55 -- common/autotest_common.sh@960 -- # wait 2939299 00:05:46.219 00:05:46.219 real 0m1.876s 00:05:46.219 user 0m5.227s 00:05:46.219 sys 0m0.399s 00:05:46.219 17:09:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:46.219 17:09:55 -- common/autotest_common.sh@10 -- # set +x 00:05:46.219 ************************************ 00:05:46.219 END TEST locking_overlapped_coremask 00:05:46.219 ************************************ 00:05:46.219 17:09:55 -- event/cpu_locks.sh@172 -- # run_test 
locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:46.219 17:09:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.219 17:09:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.219 17:09:55 -- common/autotest_common.sh@10 -- # set +x 00:05:46.477 ************************************ 00:05:46.477 START TEST locking_overlapped_coremask_via_rpc 00:05:46.477 ************************************ 00:05:46.477 17:09:55 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:05:46.477 17:09:55 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2939795 00:05:46.477 17:09:55 -- event/cpu_locks.sh@149 -- # waitforlisten 2939795 /var/tmp/spdk.sock 00:05:46.477 17:09:55 -- common/autotest_common.sh@817 -- # '[' -z 2939795 ']' 00:05:46.477 17:09:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.477 17:09:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:46.477 17:09:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.477 17:09:55 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:46.477 17:09:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:46.477 17:09:55 -- common/autotest_common.sh@10 -- # set +x 00:05:46.477 [2024-04-24 17:09:55.552757] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:05:46.477 [2024-04-24 17:09:55.552798] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2939795 ] 00:05:46.477 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.477 [2024-04-24 17:09:55.607052] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:46.477 [2024-04-24 17:09:55.607076] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:46.477 [2024-04-24 17:09:55.685542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.477 [2024-04-24 17:09:55.685637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.477 [2024-04-24 17:09:55.685639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.479 17:09:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:47.479 17:09:56 -- common/autotest_common.sh@850 -- # return 0 00:05:47.479 17:09:56 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2939812 00:05:47.479 17:09:56 -- event/cpu_locks.sh@153 -- # waitforlisten 2939812 /var/tmp/spdk2.sock 00:05:47.479 17:09:56 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:47.479 17:09:56 -- common/autotest_common.sh@817 -- # '[' -z 2939812 ']' 00:05:47.479 17:09:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.479 17:09:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:47.479 17:09:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:47.479 17:09:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:47.479 17:09:56 -- common/autotest_common.sh@10 -- # set +x 00:05:47.479 [2024-04-24 17:09:56.401034] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:05:47.479 [2024-04-24 17:09:56.401081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2939812 ] 00:05:47.479 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.479 [2024-04-24 17:09:56.473994] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:47.479 [2024-04-24 17:09:56.474030] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:47.479 [2024-04-24 17:09:56.617955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:47.479 [2024-04-24 17:09:56.621869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.479 [2024-04-24 17:09:56.621870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:48.101 17:09:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:48.101 17:09:57 -- common/autotest_common.sh@850 -- # return 0 00:05:48.101 17:09:57 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:48.101 17:09:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:48.101 17:09:57 -- common/autotest_common.sh@10 -- # set +x 00:05:48.101 17:09:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:48.101 17:09:57 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:48.101 17:09:57 -- common/autotest_common.sh@638 -- # local es=0 00:05:48.101 17:09:57 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:48.101 17:09:57 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:05:48.101 17:09:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:48.101 17:09:57 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:05:48.101 17:09:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:48.101 17:09:57 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:48.101 17:09:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:48.101 17:09:57 -- common/autotest_common.sh@10 -- # set +x 00:05:48.101 [2024-04-24 17:09:57.203890] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2939795 has claimed it. 
00:05:48.101 request: 00:05:48.101 { 00:05:48.101 "method": "framework_enable_cpumask_locks", 00:05:48.101 "req_id": 1 00:05:48.101 } 00:05:48.101 Got JSON-RPC error response 00:05:48.101 response: 00:05:48.101 { 00:05:48.101 "code": -32603, 00:05:48.101 "message": "Failed to claim CPU core: 2" 00:05:48.101 } 00:05:48.101 17:09:57 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:48.101 17:09:57 -- common/autotest_common.sh@641 -- # es=1 00:05:48.101 17:09:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:48.101 17:09:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:48.101 17:09:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:48.101 17:09:57 -- event/cpu_locks.sh@158 -- # waitforlisten 2939795 /var/tmp/spdk.sock 00:05:48.101 17:09:57 -- common/autotest_common.sh@817 -- # '[' -z 2939795 ']' 00:05:48.101 17:09:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.101 17:09:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:48.101 17:09:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.101 17:09:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:48.101 17:09:57 -- common/autotest_common.sh@10 -- # set +x 00:05:48.359 17:09:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:48.359 17:09:57 -- common/autotest_common.sh@850 -- # return 0 00:05:48.359 17:09:57 -- event/cpu_locks.sh@159 -- # waitforlisten 2939812 /var/tmp/spdk2.sock 00:05:48.359 17:09:57 -- common/autotest_common.sh@817 -- # '[' -z 2939812 ']' 00:05:48.359 17:09:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:48.359 17:09:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:48.359 17:09:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:48.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
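The JSON-RPC exchange above is the expected outcome, not a bug: both targets were started with --disable-cpumask-locks, the first (pid 2939795, -m 0x7) then re-enabled locking over RPC, so the second (-m 0x1c) cannot claim the shared core 2. Assuming scripts/rpc.py exposes the method under the same name rpc_cmd uses here, the exchange can be reproduced by hand:

# Enable core locks on the first target (default /var/tmp/spdk.sock) ...
scripts/rpc.py framework_enable_cpumask_locks
# ... then attempt the same on the second target's socket; this is the call
# expected to come back with -32603 "Failed to claim CPU core: 2".
scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks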
00:05:48.359 17:09:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:48.359 17:09:57 -- common/autotest_common.sh@10 -- # set +x 00:05:48.359 17:09:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:48.359 17:09:57 -- common/autotest_common.sh@850 -- # return 0 00:05:48.359 17:09:57 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:48.359 17:09:57 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:48.359 17:09:57 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:48.359 17:09:57 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:48.359 00:05:48.359 real 0m2.077s 00:05:48.359 user 0m0.833s 00:05:48.359 sys 0m0.175s 00:05:48.359 17:09:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:48.359 17:09:57 -- common/autotest_common.sh@10 -- # set +x 00:05:48.359 ************************************ 00:05:48.359 END TEST locking_overlapped_coremask_via_rpc 00:05:48.359 ************************************ 00:05:48.617 17:09:57 -- event/cpu_locks.sh@174 -- # cleanup 00:05:48.617 17:09:57 -- event/cpu_locks.sh@15 -- # [[ -z 2939795 ]] 00:05:48.617 17:09:57 -- event/cpu_locks.sh@15 -- # killprocess 2939795 00:05:48.617 17:09:57 -- common/autotest_common.sh@936 -- # '[' -z 2939795 ']' 00:05:48.617 17:09:57 -- common/autotest_common.sh@940 -- # kill -0 2939795 00:05:48.617 17:09:57 -- common/autotest_common.sh@941 -- # uname 00:05:48.617 17:09:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:48.617 17:09:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2939795 00:05:48.617 17:09:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:48.617 17:09:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:48.617 17:09:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2939795' 00:05:48.617 killing process with pid 2939795 00:05:48.617 17:09:57 -- common/autotest_common.sh@955 -- # kill 2939795 00:05:48.617 17:09:57 -- common/autotest_common.sh@960 -- # wait 2939795 00:05:48.875 17:09:57 -- event/cpu_locks.sh@16 -- # [[ -z 2939812 ]] 00:05:48.875 17:09:57 -- event/cpu_locks.sh@16 -- # killprocess 2939812 00:05:48.875 17:09:57 -- common/autotest_common.sh@936 -- # '[' -z 2939812 ']' 00:05:48.875 17:09:57 -- common/autotest_common.sh@940 -- # kill -0 2939812 00:05:48.875 17:09:58 -- common/autotest_common.sh@941 -- # uname 00:05:48.875 17:09:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:48.875 17:09:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2939812 00:05:48.875 17:09:58 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:48.875 17:09:58 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:48.875 17:09:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2939812' 00:05:48.875 killing process with pid 2939812 00:05:48.875 17:09:58 -- common/autotest_common.sh@955 -- # kill 2939812 00:05:48.875 17:09:58 -- common/autotest_common.sh@960 -- # wait 2939812 00:05:49.441 17:09:58 -- event/cpu_locks.sh@18 -- # rm -f 00:05:49.441 17:09:58 -- event/cpu_locks.sh@1 -- # cleanup 00:05:49.441 17:09:58 -- event/cpu_locks.sh@15 -- # [[ -z 2939795 ]] 00:05:49.441 17:09:58 -- event/cpu_locks.sh@15 -- # killprocess 2939795 
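check_remaining_locks, traced just above, is essentially a glob-versus-expansion comparison; condensed, what it asserts for the 0x7 mask is one lock file per claimed core and nothing else:

# The surviving lock files must be exactly spdk_cpu_lock_000..002 for cores 0-2.
locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
[[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "locks match" || echo "unexpected: ${locks[*]}"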
00:05:49.441 17:09:58 -- common/autotest_common.sh@936 -- # '[' -z 2939795 ']' 00:05:49.441 17:09:58 -- common/autotest_common.sh@940 -- # kill -0 2939795 00:05:49.441 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2939795) - No such process 00:05:49.442 17:09:58 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2939795 is not found' 00:05:49.442 Process with pid 2939795 is not found 00:05:49.442 17:09:58 -- event/cpu_locks.sh@16 -- # [[ -z 2939812 ]] 00:05:49.442 17:09:58 -- event/cpu_locks.sh@16 -- # killprocess 2939812 00:05:49.442 17:09:58 -- common/autotest_common.sh@936 -- # '[' -z 2939812 ']' 00:05:49.442 17:09:58 -- common/autotest_common.sh@940 -- # kill -0 2939812 00:05:49.442 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2939812) - No such process 00:05:49.442 17:09:58 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2939812 is not found' 00:05:49.442 Process with pid 2939812 is not found 00:05:49.442 17:09:58 -- event/cpu_locks.sh@18 -- # rm -f 00:05:49.442 00:05:49.442 real 0m18.092s 00:05:49.442 user 0m29.831s 00:05:49.442 sys 0m5.167s 00:05:49.442 17:09:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:49.442 17:09:58 -- common/autotest_common.sh@10 -- # set +x 00:05:49.442 ************************************ 00:05:49.442 END TEST cpu_locks 00:05:49.442 ************************************ 00:05:49.442 00:05:49.442 real 0m44.243s 00:05:49.442 user 1m22.229s 00:05:49.442 sys 0m8.735s 00:05:49.442 17:09:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:49.442 17:09:58 -- common/autotest_common.sh@10 -- # set +x 00:05:49.442 ************************************ 00:05:49.442 END TEST event 00:05:49.442 ************************************ 00:05:49.442 17:09:58 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:05:49.442 17:09:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:49.442 17:09:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.442 17:09:58 -- common/autotest_common.sh@10 -- # set +x 00:05:49.442 ************************************ 00:05:49.442 START TEST thread 00:05:49.442 ************************************ 00:05:49.442 17:09:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:05:49.442 * Looking for test storage... 00:05:49.442 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:05:49.442 17:09:58 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:49.442 17:09:58 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:49.442 17:09:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.442 17:09:58 -- common/autotest_common.sh@10 -- # set +x 00:05:49.700 ************************************ 00:05:49.700 START TEST thread_poller_perf 00:05:49.700 ************************************ 00:05:49.700 17:09:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:49.700 [2024-04-24 17:09:58.830799] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:05:49.700 [2024-04-24 17:09:58.830876] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2940378 ] 00:05:49.700 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.700 [2024-04-24 17:09:58.889582] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.957 [2024-04-24 17:09:58.960823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.957 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:50.891 ====================================== 00:05:50.891 busy:2107842122 (cyc) 00:05:50.891 total_run_count: 421000 00:05:50.891 tsc_hz: 2100000000 (cyc) 00:05:50.891 ====================================== 00:05:50.891 poller_cost: 5006 (cyc), 2383 (nsec) 00:05:50.891 00:05:50.891 real 0m1.244s 00:05:50.891 user 0m1.169s 00:05:50.891 sys 0m0.070s 00:05:50.891 17:10:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:50.891 17:10:00 -- common/autotest_common.sh@10 -- # set +x 00:05:50.891 ************************************ 00:05:50.891 END TEST thread_poller_perf 00:05:50.891 ************************************ 00:05:50.891 17:10:00 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:50.891 17:10:00 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:50.891 17:10:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.891 17:10:00 -- common/autotest_common.sh@10 -- # set +x 00:05:51.149 ************************************ 00:05:51.149 START TEST thread_poller_perf 00:05:51.149 ************************************ 00:05:51.149 17:10:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:51.149 [2024-04-24 17:10:00.221193] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:05:51.149 [2024-04-24 17:10:00.221254] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2940639 ] 00:05:51.149 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.149 [2024-04-24 17:10:00.278857] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.149 [2024-04-24 17:10:00.348849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.149 Running 1000 pollers for 1 seconds with 0 microseconds period. 
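The first poller_perf summary above is easy to sanity-check: poller_cost is busy cycles divided by the run count, and the nanosecond figure follows from the 2.1 GHz TSC. The zero-period run announced on the last line repeats the measurement without the 1-microsecond poller period, which is why its per-call cost comes out far lower:

# Reproduce the reported numbers from the raw counters in the first summary.
echo "2107842122 / 421000" | bc               # 5006 cycles per poller invocation (poller_cost)
echo "5006 * 1000000000 / 2100000000" | bc    # 2383 nsec at tsc_hz = 2100000000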
00:05:52.524 ====================================== 00:05:52.524 busy:2101327474 (cyc) 00:05:52.524 total_run_count: 5587000 00:05:52.524 tsc_hz: 2100000000 (cyc) 00:05:52.524 ====================================== 00:05:52.524 poller_cost: 376 (cyc), 179 (nsec) 00:05:52.524 00:05:52.524 real 0m1.235s 00:05:52.524 user 0m1.162s 00:05:52.524 sys 0m0.068s 00:05:52.524 17:10:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:52.524 17:10:01 -- common/autotest_common.sh@10 -- # set +x 00:05:52.524 ************************************ 00:05:52.524 END TEST thread_poller_perf 00:05:52.524 ************************************ 00:05:52.524 17:10:01 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:52.524 00:05:52.524 real 0m2.879s 00:05:52.524 user 0m2.481s 00:05:52.524 sys 0m0.375s 00:05:52.524 17:10:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:52.524 17:10:01 -- common/autotest_common.sh@10 -- # set +x 00:05:52.524 ************************************ 00:05:52.524 END TEST thread 00:05:52.524 ************************************ 00:05:52.524 17:10:01 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:05:52.524 17:10:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:52.524 17:10:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.524 17:10:01 -- common/autotest_common.sh@10 -- # set +x 00:05:52.524 ************************************ 00:05:52.524 START TEST accel 00:05:52.524 ************************************ 00:05:52.524 17:10:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:05:52.524 * Looking for test storage... 00:05:52.524 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:05:52.524 17:10:01 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:52.524 17:10:01 -- accel/accel.sh@82 -- # get_expected_opcs 00:05:52.524 17:10:01 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:52.524 17:10:01 -- accel/accel.sh@62 -- # spdk_tgt_pid=2940940 00:05:52.524 17:10:01 -- accel/accel.sh@63 -- # waitforlisten 2940940 00:05:52.524 17:10:01 -- common/autotest_common.sh@817 -- # '[' -z 2940940 ']' 00:05:52.524 17:10:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.524 17:10:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:52.524 17:10:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.524 17:10:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:52.524 17:10:01 -- common/autotest_common.sh@10 -- # set +x 00:05:52.524 17:10:01 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:52.524 17:10:01 -- accel/accel.sh@61 -- # build_accel_config 00:05:52.524 17:10:01 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.524 17:10:01 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.524 17:10:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.524 17:10:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.524 17:10:01 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:52.524 17:10:01 -- accel/accel.sh@40 -- # local IFS=, 00:05:52.524 17:10:01 -- accel/accel.sh@41 -- # jq -r . 
00:05:52.524 [2024-04-24 17:10:01.749104] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:05:52.524 [2024-04-24 17:10:01.749155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2940940 ] 00:05:52.524 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.783 [2024-04-24 17:10:01.802706] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.783 [2024-04-24 17:10:01.874821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.350 17:10:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:53.350 17:10:02 -- common/autotest_common.sh@850 -- # return 0 00:05:53.350 17:10:02 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:53.350 17:10:02 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:53.350 17:10:02 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:53.350 17:10:02 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:53.350 17:10:02 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:53.350 17:10:02 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:53.350 17:10:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:53.350 17:10:02 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:05:53.350 17:10:02 -- common/autotest_common.sh@10 -- # set +x 00:05:53.350 17:10:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:53.350 17:10:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:53.350 17:10:02 -- accel/accel.sh@72 -- # IFS== 00:05:53.350 17:10:02 -- accel/accel.sh@72 -- # read -r opc module 00:05:53.350 17:10:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:53.350 17:10:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:53.350 17:10:02 -- accel/accel.sh@72 -- # IFS== 00:05:53.350 17:10:02 -- accel/accel.sh@72 -- # read -r opc module 00:05:53.350 17:10:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:53.350 17:10:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:53.350 17:10:02 -- accel/accel.sh@72 -- # IFS== 00:05:53.350 17:10:02 -- accel/accel.sh@72 -- # read -r opc module 00:05:53.350 17:10:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:53.350 17:10:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:53.350 17:10:02 -- accel/accel.sh@72 -- # IFS== 00:05:53.350 17:10:02 -- accel/accel.sh@72 -- # read -r opc module 00:05:53.350 17:10:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:53.350 17:10:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:53.350 17:10:02 -- accel/accel.sh@72 -- # IFS== 00:05:53.350 17:10:02 -- accel/accel.sh@72 -- # read -r opc module 00:05:53.350 17:10:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:53.350 17:10:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:53.350 17:10:02 -- accel/accel.sh@72 -- # IFS== 00:05:53.350 17:10:02 -- accel/accel.sh@72 -- # read -r opc module 00:05:53.350 17:10:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:53.350 17:10:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:53.350 17:10:02 -- accel/accel.sh@72 -- # IFS== 00:05:53.350 17:10:02 -- accel/accel.sh@72 -- # read -r opc module 00:05:53.350 17:10:02 -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:05:53.350 17:10:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:53.350 17:10:02 -- accel/accel.sh@72 -- # IFS== 00:05:53.350 17:10:02 -- accel/accel.sh@72 -- # read -r opc module 00:05:53.350 17:10:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:53.350 17:10:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:53.350 17:10:02 -- accel/accel.sh@72 -- # IFS== 00:05:53.350 17:10:02 -- accel/accel.sh@72 -- # read -r opc module 00:05:53.350 17:10:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:53.350 17:10:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:53.350 17:10:02 -- accel/accel.sh@72 -- # IFS== 00:05:53.350 17:10:02 -- accel/accel.sh@72 -- # read -r opc module 00:05:53.350 17:10:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:53.350 17:10:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:53.350 17:10:02 -- accel/accel.sh@72 -- # IFS== 00:05:53.350 17:10:02 -- accel/accel.sh@72 -- # read -r opc module 00:05:53.350 17:10:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:53.350 17:10:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:53.350 17:10:02 -- accel/accel.sh@72 -- # IFS== 00:05:53.350 17:10:02 -- accel/accel.sh@72 -- # read -r opc module 00:05:53.350 17:10:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:53.350 17:10:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:53.351 17:10:02 -- accel/accel.sh@72 -- # IFS== 00:05:53.351 17:10:02 -- accel/accel.sh@72 -- # read -r opc module 00:05:53.351 17:10:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:53.351 17:10:02 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:53.351 17:10:02 -- accel/accel.sh@72 -- # IFS== 00:05:53.351 17:10:02 -- accel/accel.sh@72 -- # read -r opc module 00:05:53.351 17:10:02 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:53.351 17:10:02 -- accel/accel.sh@75 -- # killprocess 2940940 00:05:53.351 17:10:02 -- common/autotest_common.sh@936 -- # '[' -z 2940940 ']' 00:05:53.351 17:10:02 -- common/autotest_common.sh@940 -- # kill -0 2940940 00:05:53.351 17:10:02 -- common/autotest_common.sh@941 -- # uname 00:05:53.351 17:10:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:53.351 17:10:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2940940 00:05:53.610 17:10:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:53.610 17:10:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:53.610 17:10:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2940940' 00:05:53.610 killing process with pid 2940940 00:05:53.610 17:10:02 -- common/autotest_common.sh@955 -- # kill 2940940 00:05:53.610 17:10:02 -- common/autotest_common.sh@960 -- # wait 2940940 00:05:53.869 17:10:02 -- accel/accel.sh@76 -- # trap - ERR 00:05:53.869 17:10:02 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:53.869 17:10:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:53.869 17:10:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:53.869 17:10:02 -- common/autotest_common.sh@10 -- # set +x 00:05:53.869 17:10:03 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:05:53.869 17:10:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:53.869 17:10:03 -- accel/accel.sh@12 -- # build_accel_config 
00:05:53.869 17:10:03 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.869 17:10:03 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.869 17:10:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.869 17:10:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.869 17:10:03 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.869 17:10:03 -- accel/accel.sh@40 -- # local IFS=, 00:05:53.869 17:10:03 -- accel/accel.sh@41 -- # jq -r . 00:05:54.128 17:10:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:54.128 17:10:03 -- common/autotest_common.sh@10 -- # set +x 00:05:54.128 17:10:03 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:54.128 17:10:03 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:54.128 17:10:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.128 17:10:03 -- common/autotest_common.sh@10 -- # set +x 00:05:54.128 ************************************ 00:05:54.128 START TEST accel_missing_filename 00:05:54.128 ************************************ 00:05:54.128 17:10:03 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:05:54.128 17:10:03 -- common/autotest_common.sh@638 -- # local es=0 00:05:54.128 17:10:03 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:54.128 17:10:03 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:54.128 17:10:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:54.128 17:10:03 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:54.128 17:10:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:54.128 17:10:03 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:05:54.128 17:10:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:54.128 17:10:03 -- accel/accel.sh@12 -- # build_accel_config 00:05:54.128 17:10:03 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.128 17:10:03 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.128 17:10:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.128 17:10:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.128 17:10:03 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.128 17:10:03 -- accel/accel.sh@40 -- # local IFS=, 00:05:54.128 17:10:03 -- accel/accel.sh@41 -- # jq -r . 00:05:54.128 [2024-04-24 17:10:03.319282] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:05:54.128 [2024-04-24 17:10:03.319350] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2941225 ] 00:05:54.128 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.387 [2024-04-24 17:10:03.378650] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.387 [2024-04-24 17:10:03.454410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.387 [2024-04-24 17:10:03.495616] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:54.387 [2024-04-24 17:10:03.555780] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:05:54.646 A filename is required. 
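accel_missing_filename only exercises the failure path: accel_perf refuses a compress workload with no input file ("A filename is required.") and the NOT wrapper turns that non-zero exit into a pass, which is what the es= bookkeeping below records. The compress_verify case that follows supplies the file via -l but fails on -y instead; a well-formed compress run, which this log never actually executes, would presumably look like this:

# Hypothetical stand-alone compress run using the bundled test input; the suite
# itself only runs the failing variants (missing -l, or -l plus the unsupported -y).
./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib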
00:05:54.646 17:10:03 -- common/autotest_common.sh@641 -- # es=234 00:05:54.646 17:10:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:54.646 17:10:03 -- common/autotest_common.sh@650 -- # es=106 00:05:54.646 17:10:03 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:54.646 17:10:03 -- common/autotest_common.sh@658 -- # es=1 00:05:54.646 17:10:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:54.646 00:05:54.646 real 0m0.359s 00:05:54.646 user 0m0.270s 00:05:54.646 sys 0m0.128s 00:05:54.646 17:10:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:54.646 17:10:03 -- common/autotest_common.sh@10 -- # set +x 00:05:54.646 ************************************ 00:05:54.646 END TEST accel_missing_filename 00:05:54.646 ************************************ 00:05:54.646 17:10:03 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:05:54.646 17:10:03 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:54.646 17:10:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.646 17:10:03 -- common/autotest_common.sh@10 -- # set +x 00:05:54.646 ************************************ 00:05:54.646 START TEST accel_compress_verify 00:05:54.646 ************************************ 00:05:54.646 17:10:03 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:05:54.646 17:10:03 -- common/autotest_common.sh@638 -- # local es=0 00:05:54.646 17:10:03 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:05:54.646 17:10:03 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:54.646 17:10:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:54.646 17:10:03 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:54.646 17:10:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:54.646 17:10:03 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:05:54.646 17:10:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:05:54.646 17:10:03 -- accel/accel.sh@12 -- # build_accel_config 00:05:54.646 17:10:03 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.646 17:10:03 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.646 17:10:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.646 17:10:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.646 17:10:03 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.646 17:10:03 -- accel/accel.sh@40 -- # local IFS=, 00:05:54.646 17:10:03 -- accel/accel.sh@41 -- # jq -r . 00:05:54.646 [2024-04-24 17:10:03.835606] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:05:54.646 [2024-04-24 17:10:03.835678] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2941465 ] 00:05:54.646 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.905 [2024-04-24 17:10:03.894576] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.905 [2024-04-24 17:10:03.970355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.905 [2024-04-24 17:10:04.011452] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:54.905 [2024-04-24 17:10:04.070080] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:05:55.164 00:05:55.164 Compression does not support the verify option, aborting. 00:05:55.164 17:10:04 -- common/autotest_common.sh@641 -- # es=161 00:05:55.164 17:10:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:55.164 17:10:04 -- common/autotest_common.sh@650 -- # es=33 00:05:55.164 17:10:04 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:55.164 17:10:04 -- common/autotest_common.sh@658 -- # es=1 00:05:55.164 17:10:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:55.164 00:05:55.164 real 0m0.356s 00:05:55.164 user 0m0.275s 00:05:55.164 sys 0m0.121s 00:05:55.164 17:10:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:55.164 17:10:04 -- common/autotest_common.sh@10 -- # set +x 00:05:55.164 ************************************ 00:05:55.164 END TEST accel_compress_verify 00:05:55.164 ************************************ 00:05:55.164 17:10:04 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:55.164 17:10:04 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:55.164 17:10:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.164 17:10:04 -- common/autotest_common.sh@10 -- # set +x 00:05:55.164 ************************************ 00:05:55.164 START TEST accel_wrong_workload 00:05:55.164 ************************************ 00:05:55.164 17:10:04 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:05:55.164 17:10:04 -- common/autotest_common.sh@638 -- # local es=0 00:05:55.164 17:10:04 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:55.164 17:10:04 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:55.164 17:10:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:55.164 17:10:04 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:55.164 17:10:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:55.164 17:10:04 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:05:55.164 17:10:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:55.164 17:10:04 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.164 17:10:04 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.164 17:10:04 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.164 17:10:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.164 17:10:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.164 17:10:04 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.164 17:10:04 -- accel/accel.sh@40 -- # local IFS=, 00:05:55.164 17:10:04 -- accel/accel.sh@41 -- # jq -r . 
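The build_accel_config block that precedes every run (accel_json_cfg=(), three `[[ 0 -gt 0 ]]` checks, `local IFS=,`, `jq -r .`) points at a helper that assembles optional accel module settings into a JSON array and hands it to accel_perf on /dev/fd/62. A rough reconstruction of one of those checks, not the actual accel.sh source; the environment-variable and RPC method names below are assumptions:

build_accel_config() {
    accel_json_cfg=()                                  # stays empty in this job, hence the 0 -gt 0 traces
    [[ ${SPDK_TEST_ACCEL_DSA:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "dsa_scan_accel_module"}')  # hypothetical
    local IFS=,                                        # join array entries with commas
    echo "[${accel_json_cfg[*]}]" | jq -r .            # pretty-print the JSON document
}

accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 62< <(build_accel_config)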
00:05:55.164 Unsupported workload type: foobar 00:05:55.164 [2024-04-24 17:10:04.348990] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:55.164 accel_perf options: 00:05:55.164 [-h help message] 00:05:55.164 [-q queue depth per core] 00:05:55.164 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:55.164 [-T number of threads per core 00:05:55.164 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:55.164 [-t time in seconds] 00:05:55.164 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:55.164 [ dif_verify, , dif_generate, dif_generate_copy 00:05:55.164 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:55.164 [-l for compress/decompress workloads, name of uncompressed input file 00:05:55.164 [-S for crc32c workload, use this seed value (default 0) 00:05:55.164 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:55.164 [-f for fill workload, use this BYTE value (default 255) 00:05:55.164 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:55.164 [-y verify result if this switch is on] 00:05:55.164 [-a tasks to allocate per core (default: same value as -q)] 00:05:55.164 Can be used to spread operations across a wider range of memory. 00:05:55.164 17:10:04 -- common/autotest_common.sh@641 -- # es=1 00:05:55.164 17:10:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:55.164 17:10:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:55.164 17:10:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:55.164 00:05:55.164 real 0m0.032s 00:05:55.164 user 0m0.021s 00:05:55.164 sys 0m0.011s 00:05:55.164 17:10:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:55.164 17:10:04 -- common/autotest_common.sh@10 -- # set +x 00:05:55.164 ************************************ 00:05:55.164 END TEST accel_wrong_workload 00:05:55.164 ************************************ 00:05:55.164 Error: writing output failed: Broken pipe 00:05:55.164 17:10:04 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:55.164 17:10:04 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:55.164 17:10:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.164 17:10:04 -- common/autotest_common.sh@10 -- # set +x 00:05:55.423 ************************************ 00:05:55.423 START TEST accel_negative_buffers 00:05:55.423 ************************************ 00:05:55.423 17:10:04 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:55.423 17:10:04 -- common/autotest_common.sh@638 -- # local es=0 00:05:55.423 17:10:04 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:55.423 17:10:04 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:55.423 17:10:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:55.423 17:10:04 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:55.423 17:10:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:55.423 17:10:04 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:05:55.423 17:10:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 
-x -1 00:05:55.423 17:10:04 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.423 17:10:04 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.423 17:10:04 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.423 17:10:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.423 17:10:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.423 17:10:04 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.423 17:10:04 -- accel/accel.sh@40 -- # local IFS=, 00:05:55.423 17:10:04 -- accel/accel.sh@41 -- # jq -r . 00:05:55.423 -x option must be non-negative. 00:05:55.423 [2024-04-24 17:10:04.563403] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:55.423 accel_perf options: 00:05:55.423 [-h help message] 00:05:55.423 [-q queue depth per core] 00:05:55.423 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:55.423 [-T number of threads per core 00:05:55.423 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:55.423 [-t time in seconds] 00:05:55.423 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:55.423 [ dif_verify, , dif_generate, dif_generate_copy 00:05:55.423 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:55.423 [-l for compress/decompress workloads, name of uncompressed input file 00:05:55.423 [-S for crc32c workload, use this seed value (default 0) 00:05:55.423 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:55.423 [-f for fill workload, use this BYTE value (default 255) 00:05:55.423 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:55.423 [-y verify result if this switch is on] 00:05:55.423 [-a tasks to allocate per core (default: same value as -q)] 00:05:55.423 Can be used to spread operations across a wider range of memory. 
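The option list above (printed a second time verbatim for the negative-buffers case) is enough to compose runs by hand. A few examples built only from the flags it documents, with the binary path taken from the command lines in this log:

cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
./build/examples/accel_perf -t 1 -w crc32c -S 32 -y              # 1 s of crc32c, seed 32, verify results
./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y   # fill with byte 128, queue depth 64, 64 tasks per core
./build/examples/accel_perf -t 1 -w xor -y -x 3                  # xor over 3 source buffers (minimum is 2)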
00:05:55.423 17:10:04 -- common/autotest_common.sh@641 -- # es=1 00:05:55.423 17:10:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:55.423 17:10:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:55.423 17:10:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:55.423 00:05:55.423 real 0m0.036s 00:05:55.423 user 0m0.019s 00:05:55.423 sys 0m0.017s 00:05:55.423 17:10:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:55.423 17:10:04 -- common/autotest_common.sh@10 -- # set +x 00:05:55.423 ************************************ 00:05:55.423 END TEST accel_negative_buffers 00:05:55.423 ************************************ 00:05:55.423 Error: writing output failed: Broken pipe 00:05:55.423 17:10:04 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:55.423 17:10:04 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:55.423 17:10:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.423 17:10:04 -- common/autotest_common.sh@10 -- # set +x 00:05:55.681 ************************************ 00:05:55.681 START TEST accel_crc32c 00:05:55.681 ************************************ 00:05:55.681 17:10:04 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:55.681 17:10:04 -- accel/accel.sh@16 -- # local accel_opc 00:05:55.681 17:10:04 -- accel/accel.sh@17 -- # local accel_module 00:05:55.681 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:55.681 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:55.681 17:10:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:55.681 17:10:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:55.681 17:10:04 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.681 17:10:04 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.681 17:10:04 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.681 17:10:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.681 17:10:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.681 17:10:04 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.681 17:10:04 -- accel/accel.sh@40 -- # local IFS=, 00:05:55.681 17:10:04 -- accel/accel.sh@41 -- # jq -r . 00:05:55.681 [2024-04-24 17:10:04.751508] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:05:55.681 [2024-04-24 17:10:04.751563] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2941561 ] 00:05:55.681 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.681 [2024-04-24 17:10:04.810081] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.681 [2024-04-24 17:10:04.885659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.681 17:10:04 -- accel/accel.sh@20 -- # val= 00:05:55.681 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.681 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:55.681 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:55.681 17:10:04 -- accel/accel.sh@20 -- # val= 00:05:55.940 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.940 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:55.940 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:55.940 17:10:04 -- accel/accel.sh@20 -- # val=0x1 00:05:55.940 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.940 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:55.940 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:55.940 17:10:04 -- accel/accel.sh@20 -- # val= 00:05:55.940 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.940 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:55.940 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:55.940 17:10:04 -- accel/accel.sh@20 -- # val= 00:05:55.940 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.940 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:55.940 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:55.940 17:10:04 -- accel/accel.sh@20 -- # val=crc32c 00:05:55.940 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.940 17:10:04 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:55.940 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:55.940 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:55.940 17:10:04 -- accel/accel.sh@20 -- # val=32 00:05:55.940 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.940 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:55.940 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:55.940 17:10:04 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:55.940 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.940 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:55.940 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:55.940 17:10:04 -- accel/accel.sh@20 -- # val= 00:05:55.940 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.940 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:55.940 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:55.940 17:10:04 -- accel/accel.sh@20 -- # val=software 00:05:55.940 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.940 17:10:04 -- accel/accel.sh@22 -- # accel_module=software 00:05:55.940 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:55.940 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:55.940 17:10:04 -- accel/accel.sh@20 -- # val=32 00:05:55.940 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.940 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:55.940 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:55.940 17:10:04 -- accel/accel.sh@20 -- # val=32 00:05:55.940 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.940 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:55.940 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:55.940 17:10:04 -- 
accel/accel.sh@20 -- # val=1 00:05:55.940 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.940 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:55.940 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:55.940 17:10:04 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:55.940 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.940 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:55.940 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:55.940 17:10:04 -- accel/accel.sh@20 -- # val=Yes 00:05:55.940 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.940 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:55.940 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:55.940 17:10:04 -- accel/accel.sh@20 -- # val= 00:05:55.940 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.940 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:55.940 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:55.940 17:10:04 -- accel/accel.sh@20 -- # val= 00:05:55.940 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.940 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:55.940 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:56.875 17:10:06 -- accel/accel.sh@20 -- # val= 00:05:56.875 17:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.875 17:10:06 -- accel/accel.sh@19 -- # IFS=: 00:05:56.875 17:10:06 -- accel/accel.sh@19 -- # read -r var val 00:05:56.875 17:10:06 -- accel/accel.sh@20 -- # val= 00:05:56.875 17:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.875 17:10:06 -- accel/accel.sh@19 -- # IFS=: 00:05:56.875 17:10:06 -- accel/accel.sh@19 -- # read -r var val 00:05:56.875 17:10:06 -- accel/accel.sh@20 -- # val= 00:05:56.875 17:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.875 17:10:06 -- accel/accel.sh@19 -- # IFS=: 00:05:56.875 17:10:06 -- accel/accel.sh@19 -- # read -r var val 00:05:56.875 17:10:06 -- accel/accel.sh@20 -- # val= 00:05:56.875 17:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.875 17:10:06 -- accel/accel.sh@19 -- # IFS=: 00:05:56.875 17:10:06 -- accel/accel.sh@19 -- # read -r var val 00:05:56.875 17:10:06 -- accel/accel.sh@20 -- # val= 00:05:56.875 17:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.875 17:10:06 -- accel/accel.sh@19 -- # IFS=: 00:05:56.875 17:10:06 -- accel/accel.sh@19 -- # read -r var val 00:05:56.875 17:10:06 -- accel/accel.sh@20 -- # val= 00:05:56.875 17:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.875 17:10:06 -- accel/accel.sh@19 -- # IFS=: 00:05:56.875 17:10:06 -- accel/accel.sh@19 -- # read -r var val 00:05:56.875 17:10:06 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:56.875 17:10:06 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:56.875 17:10:06 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.875 00:05:56.875 real 0m1.365s 00:05:56.875 user 0m1.263s 00:05:56.875 sys 0m0.115s 00:05:56.875 17:10:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:56.875 17:10:06 -- common/autotest_common.sh@10 -- # set +x 00:05:56.875 ************************************ 00:05:56.875 END TEST accel_crc32c 00:05:56.875 ************************************ 00:05:57.133 17:10:06 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:57.133 17:10:06 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:57.133 17:10:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.133 17:10:06 -- common/autotest_common.sh@10 -- # set +x 00:05:57.133 ************************************ 00:05:57.133 START TEST 
accel_crc32c_C2 00:05:57.133 ************************************ 00:05:57.133 17:10:06 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:57.133 17:10:06 -- accel/accel.sh@16 -- # local accel_opc 00:05:57.133 17:10:06 -- accel/accel.sh@17 -- # local accel_module 00:05:57.133 17:10:06 -- accel/accel.sh@19 -- # IFS=: 00:05:57.133 17:10:06 -- accel/accel.sh@19 -- # read -r var val 00:05:57.133 17:10:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:57.133 17:10:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:57.133 17:10:06 -- accel/accel.sh@12 -- # build_accel_config 00:05:57.133 17:10:06 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.133 17:10:06 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.133 17:10:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.133 17:10:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.133 17:10:06 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:57.133 17:10:06 -- accel/accel.sh@40 -- # local IFS=, 00:05:57.133 17:10:06 -- accel/accel.sh@41 -- # jq -r . 00:05:57.133 [2024-04-24 17:10:06.276585] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:05:57.133 [2024-04-24 17:10:06.276650] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2941888 ] 00:05:57.133 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.133 [2024-04-24 17:10:06.335474] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.393 [2024-04-24 17:10:06.412548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.393 17:10:06 -- accel/accel.sh@20 -- # val= 00:05:57.393 17:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # IFS=: 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # read -r var val 00:05:57.393 17:10:06 -- accel/accel.sh@20 -- # val= 00:05:57.393 17:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # IFS=: 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # read -r var val 00:05:57.393 17:10:06 -- accel/accel.sh@20 -- # val=0x1 00:05:57.393 17:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # IFS=: 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # read -r var val 00:05:57.393 17:10:06 -- accel/accel.sh@20 -- # val= 00:05:57.393 17:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # IFS=: 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # read -r var val 00:05:57.393 17:10:06 -- accel/accel.sh@20 -- # val= 00:05:57.393 17:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # IFS=: 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # read -r var val 00:05:57.393 17:10:06 -- accel/accel.sh@20 -- # val=crc32c 00:05:57.393 17:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.393 17:10:06 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # IFS=: 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # read -r var val 00:05:57.393 17:10:06 -- accel/accel.sh@20 -- # val=0 00:05:57.393 17:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # IFS=: 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # read -r var val 00:05:57.393 17:10:06 -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.393 17:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # IFS=: 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # read -r var val 00:05:57.393 17:10:06 -- accel/accel.sh@20 -- # val= 00:05:57.393 17:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # IFS=: 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # read -r var val 00:05:57.393 17:10:06 -- accel/accel.sh@20 -- # val=software 00:05:57.393 17:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.393 17:10:06 -- accel/accel.sh@22 -- # accel_module=software 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # IFS=: 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # read -r var val 00:05:57.393 17:10:06 -- accel/accel.sh@20 -- # val=32 00:05:57.393 17:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # IFS=: 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # read -r var val 00:05:57.393 17:10:06 -- accel/accel.sh@20 -- # val=32 00:05:57.393 17:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # IFS=: 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # read -r var val 00:05:57.393 17:10:06 -- accel/accel.sh@20 -- # val=1 00:05:57.393 17:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # IFS=: 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # read -r var val 00:05:57.393 17:10:06 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.393 17:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # IFS=: 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # read -r var val 00:05:57.393 17:10:06 -- accel/accel.sh@20 -- # val=Yes 00:05:57.393 17:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # IFS=: 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # read -r var val 00:05:57.393 17:10:06 -- accel/accel.sh@20 -- # val= 00:05:57.393 17:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # IFS=: 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # read -r var val 00:05:57.393 17:10:06 -- accel/accel.sh@20 -- # val= 00:05:57.393 17:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # IFS=: 00:05:57.393 17:10:06 -- accel/accel.sh@19 -- # read -r var val 00:05:58.769 17:10:07 -- accel/accel.sh@20 -- # val= 00:05:58.769 17:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.769 17:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:58.769 17:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:58.769 17:10:07 -- accel/accel.sh@20 -- # val= 00:05:58.769 17:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.769 17:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:58.769 17:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:58.769 17:10:07 -- accel/accel.sh@20 -- # val= 00:05:58.769 17:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.769 17:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:58.769 17:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:58.769 17:10:07 -- accel/accel.sh@20 -- # val= 00:05:58.769 17:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.769 17:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:58.769 17:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:58.769 17:10:07 -- accel/accel.sh@20 -- # val= 00:05:58.769 17:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.769 17:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:58.769 17:10:07 -- 
accel/accel.sh@19 -- # read -r var val 00:05:58.769 17:10:07 -- accel/accel.sh@20 -- # val= 00:05:58.769 17:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.769 17:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:58.769 17:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:58.769 17:10:07 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:58.769 17:10:07 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:58.770 17:10:07 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.770 00:05:58.770 real 0m1.367s 00:05:58.770 user 0m1.263s 00:05:58.770 sys 0m0.117s 00:05:58.770 17:10:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:58.770 17:10:07 -- common/autotest_common.sh@10 -- # set +x 00:05:58.770 ************************************ 00:05:58.770 END TEST accel_crc32c_C2 00:05:58.770 ************************************ 00:05:58.770 17:10:07 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:58.770 17:10:07 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:58.770 17:10:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.770 17:10:07 -- common/autotest_common.sh@10 -- # set +x 00:05:58.770 ************************************ 00:05:58.770 START TEST accel_copy 00:05:58.770 ************************************ 00:05:58.770 17:10:07 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:05:58.770 17:10:07 -- accel/accel.sh@16 -- # local accel_opc 00:05:58.770 17:10:07 -- accel/accel.sh@17 -- # local accel_module 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:58.770 17:10:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:58.770 17:10:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:58.770 17:10:07 -- accel/accel.sh@12 -- # build_accel_config 00:05:58.770 17:10:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.770 17:10:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.770 17:10:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.770 17:10:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.770 17:10:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.770 17:10:07 -- accel/accel.sh@40 -- # local IFS=, 00:05:58.770 17:10:07 -- accel/accel.sh@41 -- # jq -r . 00:05:58.770 [2024-04-24 17:10:07.794160] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
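Each of these runs repeats the same @19-@23 trace pattern: a loop that splits accel_perf's report on ':' with `read -r var val`, records which opcode and module were exercised (accel_opc, accel_module), and finally asserts that the software path handled the work. Roughly, without claiming to match accel.sh line for line (the key names in the case patterns are guesses):

accel_opc="" accel_module=""
while IFS=: read -r var val; do
    case "$var" in
        *opcode*) accel_opc=${val//[[:space:]]/} ;;     # e.g. copy, crc32c, fill
        *module*) accel_module=${val//[[:space:]]/} ;;  # e.g. software
    esac
done < <(accel_perf -t 1 -w copy -y)

[[ -n $accel_module ]]              # a module was reported
[[ -n $accel_opc ]]                 # an opcode was reported
[[ $accel_module == software ]]     # and it ran on the software path, as expected in this job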
00:05:58.770 [2024-04-24 17:10:07.794219] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2942223 ] 00:05:58.770 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.770 [2024-04-24 17:10:07.851256] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.770 [2024-04-24 17:10:07.921652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.770 17:10:07 -- accel/accel.sh@20 -- # val= 00:05:58.770 17:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:58.770 17:10:07 -- accel/accel.sh@20 -- # val= 00:05:58.770 17:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:58.770 17:10:07 -- accel/accel.sh@20 -- # val=0x1 00:05:58.770 17:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:58.770 17:10:07 -- accel/accel.sh@20 -- # val= 00:05:58.770 17:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:58.770 17:10:07 -- accel/accel.sh@20 -- # val= 00:05:58.770 17:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:58.770 17:10:07 -- accel/accel.sh@20 -- # val=copy 00:05:58.770 17:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.770 17:10:07 -- accel/accel.sh@23 -- # accel_opc=copy 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:58.770 17:10:07 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:58.770 17:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:58.770 17:10:07 -- accel/accel.sh@20 -- # val= 00:05:58.770 17:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:58.770 17:10:07 -- accel/accel.sh@20 -- # val=software 00:05:58.770 17:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.770 17:10:07 -- accel/accel.sh@22 -- # accel_module=software 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:58.770 17:10:07 -- accel/accel.sh@20 -- # val=32 00:05:58.770 17:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:58.770 17:10:07 -- accel/accel.sh@20 -- # val=32 00:05:58.770 17:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:58.770 17:10:07 -- accel/accel.sh@20 -- # val=1 00:05:58.770 17:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:58.770 17:10:07 -- 
accel/accel.sh@20 -- # val='1 seconds' 00:05:58.770 17:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:58.770 17:10:07 -- accel/accel.sh@20 -- # val=Yes 00:05:58.770 17:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:58.770 17:10:07 -- accel/accel.sh@20 -- # val= 00:05:58.770 17:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:58.770 17:10:07 -- accel/accel.sh@20 -- # val= 00:05:58.770 17:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:58.770 17:10:07 -- accel/accel.sh@19 -- # read -r var val 00:06:00.146 17:10:09 -- accel/accel.sh@20 -- # val= 00:06:00.146 17:10:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.146 17:10:09 -- accel/accel.sh@19 -- # IFS=: 00:06:00.146 17:10:09 -- accel/accel.sh@19 -- # read -r var val 00:06:00.146 17:10:09 -- accel/accel.sh@20 -- # val= 00:06:00.146 17:10:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.146 17:10:09 -- accel/accel.sh@19 -- # IFS=: 00:06:00.146 17:10:09 -- accel/accel.sh@19 -- # read -r var val 00:06:00.146 17:10:09 -- accel/accel.sh@20 -- # val= 00:06:00.146 17:10:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.146 17:10:09 -- accel/accel.sh@19 -- # IFS=: 00:06:00.146 17:10:09 -- accel/accel.sh@19 -- # read -r var val 00:06:00.146 17:10:09 -- accel/accel.sh@20 -- # val= 00:06:00.146 17:10:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.146 17:10:09 -- accel/accel.sh@19 -- # IFS=: 00:06:00.146 17:10:09 -- accel/accel.sh@19 -- # read -r var val 00:06:00.146 17:10:09 -- accel/accel.sh@20 -- # val= 00:06:00.146 17:10:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.146 17:10:09 -- accel/accel.sh@19 -- # IFS=: 00:06:00.146 17:10:09 -- accel/accel.sh@19 -- # read -r var val 00:06:00.146 17:10:09 -- accel/accel.sh@20 -- # val= 00:06:00.146 17:10:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.146 17:10:09 -- accel/accel.sh@19 -- # IFS=: 00:06:00.146 17:10:09 -- accel/accel.sh@19 -- # read -r var val 00:06:00.146 17:10:09 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:00.146 17:10:09 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:00.146 17:10:09 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.146 00:06:00.146 real 0m1.357s 00:06:00.146 user 0m1.254s 00:06:00.146 sys 0m0.116s 00:06:00.146 17:10:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:00.146 17:10:09 -- common/autotest_common.sh@10 -- # set +x 00:06:00.146 ************************************ 00:06:00.146 END TEST accel_copy 00:06:00.146 ************************************ 00:06:00.146 17:10:09 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:00.146 17:10:09 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:00.146 17:10:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.146 17:10:09 -- common/autotest_common.sh@10 -- # set +x 00:06:00.146 ************************************ 00:06:00.146 START TEST accel_fill 00:06:00.146 ************************************ 00:06:00.146 17:10:09 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:00.146 17:10:09 -- accel/accel.sh@16 -- # local accel_opc 
00:06:00.146 17:10:09 -- accel/accel.sh@17 -- # local accel_module 00:06:00.146 17:10:09 -- accel/accel.sh@19 -- # IFS=: 00:06:00.146 17:10:09 -- accel/accel.sh@19 -- # read -r var val 00:06:00.146 17:10:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:00.146 17:10:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:00.146 17:10:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.146 17:10:09 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.146 17:10:09 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.146 17:10:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.146 17:10:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.146 17:10:09 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.147 17:10:09 -- accel/accel.sh@40 -- # local IFS=, 00:06:00.147 17:10:09 -- accel/accel.sh@41 -- # jq -r . 00:06:00.147 [2024-04-24 17:10:09.313284] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:06:00.147 [2024-04-24 17:10:09.313352] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2942549 ] 00:06:00.147 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.147 [2024-04-24 17:10:09.371312] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.405 [2024-04-24 17:10:09.443295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.405 17:10:09 -- accel/accel.sh@20 -- # val= 00:06:00.405 17:10:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # IFS=: 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # read -r var val 00:06:00.405 17:10:09 -- accel/accel.sh@20 -- # val= 00:06:00.405 17:10:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # IFS=: 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # read -r var val 00:06:00.405 17:10:09 -- accel/accel.sh@20 -- # val=0x1 00:06:00.405 17:10:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # IFS=: 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # read -r var val 00:06:00.405 17:10:09 -- accel/accel.sh@20 -- # val= 00:06:00.405 17:10:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # IFS=: 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # read -r var val 00:06:00.405 17:10:09 -- accel/accel.sh@20 -- # val= 00:06:00.405 17:10:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # IFS=: 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # read -r var val 00:06:00.405 17:10:09 -- accel/accel.sh@20 -- # val=fill 00:06:00.405 17:10:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.405 17:10:09 -- accel/accel.sh@23 -- # accel_opc=fill 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # IFS=: 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # read -r var val 00:06:00.405 17:10:09 -- accel/accel.sh@20 -- # val=0x80 00:06:00.405 17:10:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # IFS=: 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # read -r var val 00:06:00.405 17:10:09 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:00.405 17:10:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # IFS=: 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # 
read -r var val 00:06:00.405 17:10:09 -- accel/accel.sh@20 -- # val= 00:06:00.405 17:10:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # IFS=: 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # read -r var val 00:06:00.405 17:10:09 -- accel/accel.sh@20 -- # val=software 00:06:00.405 17:10:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.405 17:10:09 -- accel/accel.sh@22 -- # accel_module=software 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # IFS=: 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # read -r var val 00:06:00.405 17:10:09 -- accel/accel.sh@20 -- # val=64 00:06:00.405 17:10:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # IFS=: 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # read -r var val 00:06:00.405 17:10:09 -- accel/accel.sh@20 -- # val=64 00:06:00.405 17:10:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # IFS=: 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # read -r var val 00:06:00.405 17:10:09 -- accel/accel.sh@20 -- # val=1 00:06:00.405 17:10:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # IFS=: 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # read -r var val 00:06:00.405 17:10:09 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:00.405 17:10:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # IFS=: 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # read -r var val 00:06:00.405 17:10:09 -- accel/accel.sh@20 -- # val=Yes 00:06:00.405 17:10:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # IFS=: 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # read -r var val 00:06:00.405 17:10:09 -- accel/accel.sh@20 -- # val= 00:06:00.405 17:10:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # IFS=: 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # read -r var val 00:06:00.405 17:10:09 -- accel/accel.sh@20 -- # val= 00:06:00.405 17:10:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # IFS=: 00:06:00.405 17:10:09 -- accel/accel.sh@19 -- # read -r var val 00:06:01.788 17:10:10 -- accel/accel.sh@20 -- # val= 00:06:01.788 17:10:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.788 17:10:10 -- accel/accel.sh@19 -- # IFS=: 00:06:01.788 17:10:10 -- accel/accel.sh@19 -- # read -r var val 00:06:01.788 17:10:10 -- accel/accel.sh@20 -- # val= 00:06:01.789 17:10:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 17:10:10 -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 17:10:10 -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 17:10:10 -- accel/accel.sh@20 -- # val= 00:06:01.789 17:10:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 17:10:10 -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 17:10:10 -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 17:10:10 -- accel/accel.sh@20 -- # val= 00:06:01.789 17:10:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 17:10:10 -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 17:10:10 -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 17:10:10 -- accel/accel.sh@20 -- # val= 00:06:01.789 17:10:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 17:10:10 -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 17:10:10 -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 17:10:10 -- accel/accel.sh@20 -- # val= 00:06:01.789 17:10:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 17:10:10 -- accel/accel.sh@19 -- # 
IFS=: 00:06:01.789 17:10:10 -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 17:10:10 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:01.789 17:10:10 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:01.789 17:10:10 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.789 00:06:01.789 real 0m1.364s 00:06:01.789 user 0m1.259s 00:06:01.789 sys 0m0.117s 00:06:01.789 17:10:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:01.789 17:10:10 -- common/autotest_common.sh@10 -- # set +x 00:06:01.789 ************************************ 00:06:01.789 END TEST accel_fill 00:06:01.789 ************************************ 00:06:01.789 17:10:10 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:01.789 17:10:10 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:01.789 17:10:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.789 17:10:10 -- common/autotest_common.sh@10 -- # set +x 00:06:01.789 ************************************ 00:06:01.789 START TEST accel_copy_crc32c 00:06:01.789 ************************************ 00:06:01.789 17:10:10 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:06:01.789 17:10:10 -- accel/accel.sh@16 -- # local accel_opc 00:06:01.789 17:10:10 -- accel/accel.sh@17 -- # local accel_module 00:06:01.789 17:10:10 -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 17:10:10 -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 17:10:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:01.789 17:10:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:01.789 17:10:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:01.789 17:10:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.789 17:10:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.789 17:10:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.789 17:10:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.789 17:10:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.789 17:10:10 -- accel/accel.sh@40 -- # local IFS=, 00:06:01.789 17:10:10 -- accel/accel.sh@41 -- # jq -r . 00:06:01.789 [2024-04-24 17:10:10.843790] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
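The `[[ software == \s\o\f\t\w\a\r\e ]]` lines in these checks are not corruption: when xtrace prints a `[[ ... == "..." ]]` test whose right-hand side is quoted, bash escapes every character of the pattern to show it is matched literally rather than as a glob. The same effect can be reproduced directly:

expected=software
accel_module=software
set -x
[[ $accel_module == "$expected" ]]   # trace prints: [[ software == \s\o\f\t\w\a\r\e ]]
set +x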
00:06:01.789 [2024-04-24 17:10:10.843867] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2942801 ] 00:06:01.789 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.789 [2024-04-24 17:10:10.901965] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.789 [2024-04-24 17:10:10.980433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.789 17:10:11 -- accel/accel.sh@20 -- # val= 00:06:01.789 17:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 17:10:11 -- accel/accel.sh@20 -- # val= 00:06:01.789 17:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 17:10:11 -- accel/accel.sh@20 -- # val=0x1 00:06:01.789 17:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 17:10:11 -- accel/accel.sh@20 -- # val= 00:06:01.789 17:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 17:10:11 -- accel/accel.sh@20 -- # val= 00:06:01.789 17:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 17:10:11 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:01.789 17:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 17:10:11 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 17:10:11 -- accel/accel.sh@20 -- # val=0 00:06:01.789 17:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 17:10:11 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:01.789 17:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 17:10:11 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:01.789 17:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 17:10:11 -- accel/accel.sh@20 -- # val= 00:06:01.789 17:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 17:10:11 -- accel/accel.sh@20 -- # val=software 00:06:01.789 17:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 17:10:11 -- accel/accel.sh@22 -- # accel_module=software 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 17:10:11 -- accel/accel.sh@20 -- # val=32 00:06:01.789 17:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # read -r var val 
00:06:01.789 17:10:11 -- accel/accel.sh@20 -- # val=32 00:06:01.789 17:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 17:10:11 -- accel/accel.sh@20 -- # val=1 00:06:01.789 17:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 17:10:11 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:01.789 17:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 17:10:11 -- accel/accel.sh@20 -- # val=Yes 00:06:01.789 17:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 17:10:11 -- accel/accel.sh@20 -- # val= 00:06:01.789 17:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # read -r var val 00:06:01.789 17:10:11 -- accel/accel.sh@20 -- # val= 00:06:01.789 17:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # IFS=: 00:06:01.789 17:10:11 -- accel/accel.sh@19 -- # read -r var val 00:06:03.166 17:10:12 -- accel/accel.sh@20 -- # val= 00:06:03.166 17:10:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.166 17:10:12 -- accel/accel.sh@19 -- # IFS=: 00:06:03.166 17:10:12 -- accel/accel.sh@19 -- # read -r var val 00:06:03.166 17:10:12 -- accel/accel.sh@20 -- # val= 00:06:03.166 17:10:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.166 17:10:12 -- accel/accel.sh@19 -- # IFS=: 00:06:03.166 17:10:12 -- accel/accel.sh@19 -- # read -r var val 00:06:03.166 17:10:12 -- accel/accel.sh@20 -- # val= 00:06:03.166 17:10:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.166 17:10:12 -- accel/accel.sh@19 -- # IFS=: 00:06:03.166 17:10:12 -- accel/accel.sh@19 -- # read -r var val 00:06:03.166 17:10:12 -- accel/accel.sh@20 -- # val= 00:06:03.166 17:10:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.166 17:10:12 -- accel/accel.sh@19 -- # IFS=: 00:06:03.166 17:10:12 -- accel/accel.sh@19 -- # read -r var val 00:06:03.166 17:10:12 -- accel/accel.sh@20 -- # val= 00:06:03.166 17:10:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.166 17:10:12 -- accel/accel.sh@19 -- # IFS=: 00:06:03.166 17:10:12 -- accel/accel.sh@19 -- # read -r var val 00:06:03.166 17:10:12 -- accel/accel.sh@20 -- # val= 00:06:03.166 17:10:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.166 17:10:12 -- accel/accel.sh@19 -- # IFS=: 00:06:03.166 17:10:12 -- accel/accel.sh@19 -- # read -r var val 00:06:03.166 17:10:12 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:03.166 17:10:12 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:03.166 17:10:12 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.166 00:06:03.166 real 0m1.364s 00:06:03.166 user 0m1.250s 00:06:03.166 sys 0m0.119s 00:06:03.166 17:10:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:03.166 17:10:12 -- common/autotest_common.sh@10 -- # set +x 00:06:03.166 ************************************ 00:06:03.166 END TEST accel_copy_crc32c 00:06:03.166 ************************************ 00:06:03.166 17:10:12 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:03.166 
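run_test, which launches each of these cases (here accel_copy_crc32c_C2), appears to do little beyond checking it was handed a command (the `'[' 9 -le 1 ']'` argument-count guards), printing the asterisk START/END banners, and timing the wrapped function to produce the real/user/sys lines above. A simplified stand-in, not the autotest_common.sh original:

run_test() {
    if [ $# -le 1 ]; then                 # mirrors the '[' N -le 1 ']' guard in the trace
        echo "usage: run_test <name> <command...>" >&2
        return 1
    fi
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                             # e.g. accel_test -t 1 -w copy_crc32c -y -C 2
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}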
17:10:12 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:03.166 17:10:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.166 17:10:12 -- common/autotest_common.sh@10 -- # set +x 00:06:03.166 ************************************ 00:06:03.166 START TEST accel_copy_crc32c_C2 00:06:03.166 ************************************ 00:06:03.166 17:10:12 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:03.166 17:10:12 -- accel/accel.sh@16 -- # local accel_opc 00:06:03.166 17:10:12 -- accel/accel.sh@17 -- # local accel_module 00:06:03.167 17:10:12 -- accel/accel.sh@19 -- # IFS=: 00:06:03.167 17:10:12 -- accel/accel.sh@19 -- # read -r var val 00:06:03.167 17:10:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:03.167 17:10:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:03.167 17:10:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:03.167 17:10:12 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.167 17:10:12 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.167 17:10:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.167 17:10:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.167 17:10:12 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.167 17:10:12 -- accel/accel.sh@40 -- # local IFS=, 00:06:03.167 17:10:12 -- accel/accel.sh@41 -- # jq -r . 00:06:03.167 [2024-04-24 17:10:12.357766] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:06:03.167 [2024-04-24 17:10:12.357836] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2943062 ] 00:06:03.167 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.426 [2024-04-24 17:10:12.416896] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.426 [2024-04-24 17:10:12.492566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.426 17:10:12 -- accel/accel.sh@20 -- # val= 00:06:03.426 17:10:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # IFS=: 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # read -r var val 00:06:03.426 17:10:12 -- accel/accel.sh@20 -- # val= 00:06:03.426 17:10:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # IFS=: 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # read -r var val 00:06:03.426 17:10:12 -- accel/accel.sh@20 -- # val=0x1 00:06:03.426 17:10:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # IFS=: 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # read -r var val 00:06:03.426 17:10:12 -- accel/accel.sh@20 -- # val= 00:06:03.426 17:10:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # IFS=: 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # read -r var val 00:06:03.426 17:10:12 -- accel/accel.sh@20 -- # val= 00:06:03.426 17:10:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # IFS=: 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # read -r var val 00:06:03.426 17:10:12 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:03.426 17:10:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.426 17:10:12 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # IFS=: 00:06:03.426 
17:10:12 -- accel/accel.sh@19 -- # read -r var val 00:06:03.426 17:10:12 -- accel/accel.sh@20 -- # val=0 00:06:03.426 17:10:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # IFS=: 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # read -r var val 00:06:03.426 17:10:12 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:03.426 17:10:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # IFS=: 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # read -r var val 00:06:03.426 17:10:12 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:03.426 17:10:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # IFS=: 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # read -r var val 00:06:03.426 17:10:12 -- accel/accel.sh@20 -- # val= 00:06:03.426 17:10:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # IFS=: 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # read -r var val 00:06:03.426 17:10:12 -- accel/accel.sh@20 -- # val=software 00:06:03.426 17:10:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.426 17:10:12 -- accel/accel.sh@22 -- # accel_module=software 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # IFS=: 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # read -r var val 00:06:03.426 17:10:12 -- accel/accel.sh@20 -- # val=32 00:06:03.426 17:10:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # IFS=: 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # read -r var val 00:06:03.426 17:10:12 -- accel/accel.sh@20 -- # val=32 00:06:03.426 17:10:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # IFS=: 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # read -r var val 00:06:03.426 17:10:12 -- accel/accel.sh@20 -- # val=1 00:06:03.426 17:10:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # IFS=: 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # read -r var val 00:06:03.426 17:10:12 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:03.426 17:10:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # IFS=: 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # read -r var val 00:06:03.426 17:10:12 -- accel/accel.sh@20 -- # val=Yes 00:06:03.426 17:10:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # IFS=: 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # read -r var val 00:06:03.426 17:10:12 -- accel/accel.sh@20 -- # val= 00:06:03.426 17:10:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # IFS=: 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # read -r var val 00:06:03.426 17:10:12 -- accel/accel.sh@20 -- # val= 00:06:03.426 17:10:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # IFS=: 00:06:03.426 17:10:12 -- accel/accel.sh@19 -- # read -r var val 00:06:04.803 17:10:13 -- accel/accel.sh@20 -- # val= 00:06:04.803 17:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.803 17:10:13 -- accel/accel.sh@19 -- # IFS=: 00:06:04.803 17:10:13 -- accel/accel.sh@19 -- # read -r var val 00:06:04.803 17:10:13 -- accel/accel.sh@20 -- # val= 00:06:04.803 17:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.803 17:10:13 -- accel/accel.sh@19 -- # IFS=: 00:06:04.803 17:10:13 -- accel/accel.sh@19 -- # read -r var val 00:06:04.803 17:10:13 -- accel/accel.sh@20 -- # val= 00:06:04.803 17:10:13 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:04.803 17:10:13 -- accel/accel.sh@19 -- # IFS=: 00:06:04.803 17:10:13 -- accel/accel.sh@19 -- # read -r var val 00:06:04.803 17:10:13 -- accel/accel.sh@20 -- # val= 00:06:04.803 17:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.803 17:10:13 -- accel/accel.sh@19 -- # IFS=: 00:06:04.803 17:10:13 -- accel/accel.sh@19 -- # read -r var val 00:06:04.803 17:10:13 -- accel/accel.sh@20 -- # val= 00:06:04.803 17:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.803 17:10:13 -- accel/accel.sh@19 -- # IFS=: 00:06:04.803 17:10:13 -- accel/accel.sh@19 -- # read -r var val 00:06:04.803 17:10:13 -- accel/accel.sh@20 -- # val= 00:06:04.803 17:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.803 17:10:13 -- accel/accel.sh@19 -- # IFS=: 00:06:04.803 17:10:13 -- accel/accel.sh@19 -- # read -r var val 00:06:04.803 17:10:13 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:04.803 17:10:13 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:04.803 17:10:13 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.803 00:06:04.803 real 0m1.360s 00:06:04.803 user 0m1.248s 00:06:04.803 sys 0m0.117s 00:06:04.803 17:10:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:04.803 17:10:13 -- common/autotest_common.sh@10 -- # set +x 00:06:04.803 ************************************ 00:06:04.803 END TEST accel_copy_crc32c_C2 00:06:04.803 ************************************ 00:06:04.803 17:10:13 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:04.803 17:10:13 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:04.803 17:10:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.803 17:10:13 -- common/autotest_common.sh@10 -- # set +x 00:06:04.803 ************************************ 00:06:04.803 START TEST accel_dualcast 00:06:04.803 ************************************ 00:06:04.803 17:10:13 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:06:04.803 17:10:13 -- accel/accel.sh@16 -- # local accel_opc 00:06:04.803 17:10:13 -- accel/accel.sh@17 -- # local accel_module 00:06:04.803 17:10:13 -- accel/accel.sh@19 -- # IFS=: 00:06:04.803 17:10:13 -- accel/accel.sh@19 -- # read -r var val 00:06:04.803 17:10:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:04.803 17:10:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:04.803 17:10:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.803 17:10:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.803 17:10:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.803 17:10:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.803 17:10:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.803 17:10:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.803 17:10:13 -- accel/accel.sh@40 -- # local IFS=, 00:06:04.803 17:10:13 -- accel/accel.sh@41 -- # jq -r . 00:06:04.803 [2024-04-24 17:10:13.880157] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:06:04.803 [2024-04-24 17:10:13.880228] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2943316 ] 00:06:04.803 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.803 [2024-04-24 17:10:13.938786] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.803 [2024-04-24 17:10:14.014423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.062 17:10:14 -- accel/accel.sh@20 -- # val= 00:06:05.062 17:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.062 17:10:14 -- accel/accel.sh@19 -- # IFS=: 00:06:05.062 17:10:14 -- accel/accel.sh@19 -- # read -r var val 00:06:05.062 17:10:14 -- accel/accel.sh@20 -- # val= 00:06:05.062 17:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.062 17:10:14 -- accel/accel.sh@19 -- # IFS=: 00:06:05.062 17:10:14 -- accel/accel.sh@19 -- # read -r var val 00:06:05.062 17:10:14 -- accel/accel.sh@20 -- # val=0x1 00:06:05.062 17:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.062 17:10:14 -- accel/accel.sh@19 -- # IFS=: 00:06:05.062 17:10:14 -- accel/accel.sh@19 -- # read -r var val 00:06:05.062 17:10:14 -- accel/accel.sh@20 -- # val= 00:06:05.062 17:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.062 17:10:14 -- accel/accel.sh@19 -- # IFS=: 00:06:05.062 17:10:14 -- accel/accel.sh@19 -- # read -r var val 00:06:05.062 17:10:14 -- accel/accel.sh@20 -- # val= 00:06:05.062 17:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.062 17:10:14 -- accel/accel.sh@19 -- # IFS=: 00:06:05.062 17:10:14 -- accel/accel.sh@19 -- # read -r var val 00:06:05.062 17:10:14 -- accel/accel.sh@20 -- # val=dualcast 00:06:05.062 17:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.062 17:10:14 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:05.062 17:10:14 -- accel/accel.sh@19 -- # IFS=: 00:06:05.062 17:10:14 -- accel/accel.sh@19 -- # read -r var val 00:06:05.062 17:10:14 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:05.062 17:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.062 17:10:14 -- accel/accel.sh@19 -- # IFS=: 00:06:05.062 17:10:14 -- accel/accel.sh@19 -- # read -r var val 00:06:05.062 17:10:14 -- accel/accel.sh@20 -- # val= 00:06:05.062 17:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.062 17:10:14 -- accel/accel.sh@19 -- # IFS=: 00:06:05.062 17:10:14 -- accel/accel.sh@19 -- # read -r var val 00:06:05.063 17:10:14 -- accel/accel.sh@20 -- # val=software 00:06:05.063 17:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.063 17:10:14 -- accel/accel.sh@22 -- # accel_module=software 00:06:05.063 17:10:14 -- accel/accel.sh@19 -- # IFS=: 00:06:05.063 17:10:14 -- accel/accel.sh@19 -- # read -r var val 00:06:05.063 17:10:14 -- accel/accel.sh@20 -- # val=32 00:06:05.063 17:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.063 17:10:14 -- accel/accel.sh@19 -- # IFS=: 00:06:05.063 17:10:14 -- accel/accel.sh@19 -- # read -r var val 00:06:05.063 17:10:14 -- accel/accel.sh@20 -- # val=32 00:06:05.063 17:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.063 17:10:14 -- accel/accel.sh@19 -- # IFS=: 00:06:05.063 17:10:14 -- accel/accel.sh@19 -- # read -r var val 00:06:05.063 17:10:14 -- accel/accel.sh@20 -- # val=1 00:06:05.063 17:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.063 17:10:14 -- accel/accel.sh@19 -- # IFS=: 00:06:05.063 17:10:14 -- accel/accel.sh@19 -- # read -r var val 00:06:05.063 17:10:14 
-- accel/accel.sh@20 -- # val='1 seconds' 00:06:05.063 17:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.063 17:10:14 -- accel/accel.sh@19 -- # IFS=: 00:06:05.063 17:10:14 -- accel/accel.sh@19 -- # read -r var val 00:06:05.063 17:10:14 -- accel/accel.sh@20 -- # val=Yes 00:06:05.063 17:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.063 17:10:14 -- accel/accel.sh@19 -- # IFS=: 00:06:05.063 17:10:14 -- accel/accel.sh@19 -- # read -r var val 00:06:05.063 17:10:14 -- accel/accel.sh@20 -- # val= 00:06:05.063 17:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.063 17:10:14 -- accel/accel.sh@19 -- # IFS=: 00:06:05.063 17:10:14 -- accel/accel.sh@19 -- # read -r var val 00:06:05.063 17:10:14 -- accel/accel.sh@20 -- # val= 00:06:05.063 17:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.063 17:10:14 -- accel/accel.sh@19 -- # IFS=: 00:06:05.063 17:10:14 -- accel/accel.sh@19 -- # read -r var val 00:06:05.999 17:10:15 -- accel/accel.sh@20 -- # val= 00:06:05.999 17:10:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.999 17:10:15 -- accel/accel.sh@19 -- # IFS=: 00:06:05.999 17:10:15 -- accel/accel.sh@19 -- # read -r var val 00:06:05.999 17:10:15 -- accel/accel.sh@20 -- # val= 00:06:05.999 17:10:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.999 17:10:15 -- accel/accel.sh@19 -- # IFS=: 00:06:05.999 17:10:15 -- accel/accel.sh@19 -- # read -r var val 00:06:05.999 17:10:15 -- accel/accel.sh@20 -- # val= 00:06:05.999 17:10:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.999 17:10:15 -- accel/accel.sh@19 -- # IFS=: 00:06:05.999 17:10:15 -- accel/accel.sh@19 -- # read -r var val 00:06:05.999 17:10:15 -- accel/accel.sh@20 -- # val= 00:06:05.999 17:10:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.999 17:10:15 -- accel/accel.sh@19 -- # IFS=: 00:06:05.999 17:10:15 -- accel/accel.sh@19 -- # read -r var val 00:06:05.999 17:10:15 -- accel/accel.sh@20 -- # val= 00:06:05.999 17:10:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.999 17:10:15 -- accel/accel.sh@19 -- # IFS=: 00:06:05.999 17:10:15 -- accel/accel.sh@19 -- # read -r var val 00:06:05.999 17:10:15 -- accel/accel.sh@20 -- # val= 00:06:05.999 17:10:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.999 17:10:15 -- accel/accel.sh@19 -- # IFS=: 00:06:05.999 17:10:15 -- accel/accel.sh@19 -- # read -r var val 00:06:05.999 17:10:15 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:05.999 17:10:15 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:05.999 17:10:15 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.999 00:06:05.999 real 0m1.362s 00:06:05.999 user 0m1.256s 00:06:05.999 sys 0m0.110s 00:06:05.999 17:10:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:05.999 17:10:15 -- common/autotest_common.sh@10 -- # set +x 00:06:05.999 ************************************ 00:06:05.999 END TEST accel_dualcast 00:06:05.999 ************************************ 00:06:05.999 17:10:15 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:05.999 17:10:15 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:05.999 17:10:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.999 17:10:15 -- common/autotest_common.sh@10 -- # set +x 00:06:06.258 ************************************ 00:06:06.258 START TEST accel_compare 00:06:06.258 ************************************ 00:06:06.258 17:10:15 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:06:06.258 17:10:15 -- accel/accel.sh@16 -- # local accel_opc 00:06:06.258 17:10:15 
-- accel/accel.sh@17 -- # local accel_module 00:06:06.258 17:10:15 -- accel/accel.sh@19 -- # IFS=: 00:06:06.258 17:10:15 -- accel/accel.sh@19 -- # read -r var val 00:06:06.258 17:10:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:06.258 17:10:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:06.258 17:10:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.258 17:10:15 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.258 17:10:15 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.258 17:10:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.258 17:10:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.258 17:10:15 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.258 17:10:15 -- accel/accel.sh@40 -- # local IFS=, 00:06:06.258 17:10:15 -- accel/accel.sh@41 -- # jq -r . 00:06:06.258 [2024-04-24 17:10:15.392657] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:06:06.258 [2024-04-24 17:10:15.392703] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2943579 ] 00:06:06.258 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.258 [2024-04-24 17:10:15.446892] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.517 [2024-04-24 17:10:15.518345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.517 17:10:15 -- accel/accel.sh@20 -- # val= 00:06:06.517 17:10:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.517 17:10:15 -- accel/accel.sh@19 -- # IFS=: 00:06:06.517 17:10:15 -- accel/accel.sh@19 -- # read -r var val 00:06:06.517 17:10:15 -- accel/accel.sh@20 -- # val= 00:06:06.518 17:10:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.518 17:10:15 -- accel/accel.sh@19 -- # IFS=: 00:06:06.518 17:10:15 -- accel/accel.sh@19 -- # read -r var val 00:06:06.518 17:10:15 -- accel/accel.sh@20 -- # val=0x1 00:06:06.518 17:10:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.518 17:10:15 -- accel/accel.sh@19 -- # IFS=: 00:06:06.518 17:10:15 -- accel/accel.sh@19 -- # read -r var val 00:06:06.518 17:10:15 -- accel/accel.sh@20 -- # val= 00:06:06.518 17:10:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.518 17:10:15 -- accel/accel.sh@19 -- # IFS=: 00:06:06.518 17:10:15 -- accel/accel.sh@19 -- # read -r var val 00:06:06.518 17:10:15 -- accel/accel.sh@20 -- # val= 00:06:06.518 17:10:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.518 17:10:15 -- accel/accel.sh@19 -- # IFS=: 00:06:06.518 17:10:15 -- accel/accel.sh@19 -- # read -r var val 00:06:06.518 17:10:15 -- accel/accel.sh@20 -- # val=compare 00:06:06.518 17:10:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.518 17:10:15 -- accel/accel.sh@23 -- # accel_opc=compare 00:06:06.518 17:10:15 -- accel/accel.sh@19 -- # IFS=: 00:06:06.518 17:10:15 -- accel/accel.sh@19 -- # read -r var val 00:06:06.518 17:10:15 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:06.518 17:10:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.518 17:10:15 -- accel/accel.sh@19 -- # IFS=: 00:06:06.518 17:10:15 -- accel/accel.sh@19 -- # read -r var val 00:06:06.518 17:10:15 -- accel/accel.sh@20 -- # val= 00:06:06.518 17:10:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.518 17:10:15 -- accel/accel.sh@19 -- # IFS=: 00:06:06.518 17:10:15 -- accel/accel.sh@19 -- # read -r var val 00:06:06.518 17:10:15 -- 
accel/accel.sh@20 -- # val=software 00:06:06.518 17:10:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.518 17:10:15 -- accel/accel.sh@22 -- # accel_module=software 00:06:06.518 17:10:15 -- accel/accel.sh@19 -- # IFS=: 00:06:06.518 17:10:15 -- accel/accel.sh@19 -- # read -r var val 00:06:06.518 17:10:15 -- accel/accel.sh@20 -- # val=32 00:06:06.518 17:10:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.518 17:10:15 -- accel/accel.sh@19 -- # IFS=: 00:06:06.518 17:10:15 -- accel/accel.sh@19 -- # read -r var val 00:06:06.518 17:10:15 -- accel/accel.sh@20 -- # val=32 00:06:06.518 17:10:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.518 17:10:15 -- accel/accel.sh@19 -- # IFS=: 00:06:06.518 17:10:15 -- accel/accel.sh@19 -- # read -r var val 00:06:06.518 17:10:15 -- accel/accel.sh@20 -- # val=1 00:06:06.518 17:10:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.518 17:10:15 -- accel/accel.sh@19 -- # IFS=: 00:06:06.518 17:10:15 -- accel/accel.sh@19 -- # read -r var val 00:06:06.518 17:10:15 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:06.518 17:10:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.518 17:10:15 -- accel/accel.sh@19 -- # IFS=: 00:06:06.518 17:10:15 -- accel/accel.sh@19 -- # read -r var val 00:06:06.518 17:10:15 -- accel/accel.sh@20 -- # val=Yes 00:06:06.518 17:10:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.518 17:10:15 -- accel/accel.sh@19 -- # IFS=: 00:06:06.518 17:10:15 -- accel/accel.sh@19 -- # read -r var val 00:06:06.518 17:10:15 -- accel/accel.sh@20 -- # val= 00:06:06.518 17:10:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.518 17:10:15 -- accel/accel.sh@19 -- # IFS=: 00:06:06.518 17:10:15 -- accel/accel.sh@19 -- # read -r var val 00:06:06.518 17:10:15 -- accel/accel.sh@20 -- # val= 00:06:06.518 17:10:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.518 17:10:15 -- accel/accel.sh@19 -- # IFS=: 00:06:06.518 17:10:15 -- accel/accel.sh@19 -- # read -r var val 00:06:07.896 17:10:16 -- accel/accel.sh@20 -- # val= 00:06:07.896 17:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.896 17:10:16 -- accel/accel.sh@19 -- # IFS=: 00:06:07.896 17:10:16 -- accel/accel.sh@19 -- # read -r var val 00:06:07.896 17:10:16 -- accel/accel.sh@20 -- # val= 00:06:07.896 17:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.896 17:10:16 -- accel/accel.sh@19 -- # IFS=: 00:06:07.896 17:10:16 -- accel/accel.sh@19 -- # read -r var val 00:06:07.896 17:10:16 -- accel/accel.sh@20 -- # val= 00:06:07.896 17:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.896 17:10:16 -- accel/accel.sh@19 -- # IFS=: 00:06:07.896 17:10:16 -- accel/accel.sh@19 -- # read -r var val 00:06:07.896 17:10:16 -- accel/accel.sh@20 -- # val= 00:06:07.896 17:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.896 17:10:16 -- accel/accel.sh@19 -- # IFS=: 00:06:07.896 17:10:16 -- accel/accel.sh@19 -- # read -r var val 00:06:07.896 17:10:16 -- accel/accel.sh@20 -- # val= 00:06:07.896 17:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.896 17:10:16 -- accel/accel.sh@19 -- # IFS=: 00:06:07.896 17:10:16 -- accel/accel.sh@19 -- # read -r var val 00:06:07.896 17:10:16 -- accel/accel.sh@20 -- # val= 00:06:07.896 17:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.896 17:10:16 -- accel/accel.sh@19 -- # IFS=: 00:06:07.896 17:10:16 -- accel/accel.sh@19 -- # read -r var val 00:06:07.896 17:10:16 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:07.896 17:10:16 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:07.896 17:10:16 -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:06:07.896 00:06:07.896 real 0m1.349s 00:06:07.896 user 0m1.246s 00:06:07.896 sys 0m0.107s 00:06:07.896 17:10:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:07.896 17:10:16 -- common/autotest_common.sh@10 -- # set +x 00:06:07.896 ************************************ 00:06:07.896 END TEST accel_compare 00:06:07.896 ************************************ 00:06:07.896 17:10:16 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:07.896 17:10:16 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:07.896 17:10:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.896 17:10:16 -- common/autotest_common.sh@10 -- # set +x 00:06:07.896 ************************************ 00:06:07.896 START TEST accel_xor 00:06:07.896 ************************************ 00:06:07.896 17:10:16 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:06:07.896 17:10:16 -- accel/accel.sh@16 -- # local accel_opc 00:06:07.896 17:10:16 -- accel/accel.sh@17 -- # local accel_module 00:06:07.896 17:10:16 -- accel/accel.sh@19 -- # IFS=: 00:06:07.896 17:10:16 -- accel/accel.sh@19 -- # read -r var val 00:06:07.896 17:10:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:07.896 17:10:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:07.896 17:10:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:07.896 17:10:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.896 17:10:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.896 17:10:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.896 17:10:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.896 17:10:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.896 17:10:16 -- accel/accel.sh@40 -- # local IFS=, 00:06:07.896 17:10:16 -- accel/accel.sh@41 -- # jq -r . 00:06:07.896 [2024-04-24 17:10:16.902998] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
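Aside for readers reproducing the xor pass that starts here outside the run_test/accel_test wrappers: the trace above shows accel_perf being driven with -t 1 -w xor -y, and the source-buffer count reads back as 2 here and as 3 in the -x 3 repeat of this workload just below. A hand-run sketch follows; the relative build path is taken from the workspace layout in this log, treating -x as the xor source count is an inference from those two traced values, and dropping the -c /dev/fd/62 config argument assumes a plain software run needs no accel module JSON.

cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
for srcs in 2 3; do
    ./build/examples/accel_perf \
        -t 1 \            # run time in seconds ('1 seconds' in the trace)
        -w xor \          # workload under test
        -y \              # verification switch used by every case in this suite
        -x "$srcs"        # xor source count (inferred from the traced 2 vs. 3)
done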
00:06:07.896 [2024-04-24 17:10:16.903057] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2943834 ] 00:06:07.896 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.896 [2024-04-24 17:10:16.962874] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.896 [2024-04-24 17:10:17.037407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.896 17:10:17 -- accel/accel.sh@20 -- # val= 00:06:07.896 17:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.896 17:10:17 -- accel/accel.sh@19 -- # IFS=: 00:06:07.896 17:10:17 -- accel/accel.sh@19 -- # read -r var val 00:06:07.896 17:10:17 -- accel/accel.sh@20 -- # val= 00:06:07.896 17:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.896 17:10:17 -- accel/accel.sh@19 -- # IFS=: 00:06:07.896 17:10:17 -- accel/accel.sh@19 -- # read -r var val 00:06:07.896 17:10:17 -- accel/accel.sh@20 -- # val=0x1 00:06:07.896 17:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.896 17:10:17 -- accel/accel.sh@19 -- # IFS=: 00:06:07.896 17:10:17 -- accel/accel.sh@19 -- # read -r var val 00:06:07.896 17:10:17 -- accel/accel.sh@20 -- # val= 00:06:07.896 17:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.896 17:10:17 -- accel/accel.sh@19 -- # IFS=: 00:06:07.896 17:10:17 -- accel/accel.sh@19 -- # read -r var val 00:06:07.896 17:10:17 -- accel/accel.sh@20 -- # val= 00:06:07.896 17:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.896 17:10:17 -- accel/accel.sh@19 -- # IFS=: 00:06:07.896 17:10:17 -- accel/accel.sh@19 -- # read -r var val 00:06:07.896 17:10:17 -- accel/accel.sh@20 -- # val=xor 00:06:07.896 17:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.896 17:10:17 -- accel/accel.sh@23 -- # accel_opc=xor 00:06:07.896 17:10:17 -- accel/accel.sh@19 -- # IFS=: 00:06:07.896 17:10:17 -- accel/accel.sh@19 -- # read -r var val 00:06:07.896 17:10:17 -- accel/accel.sh@20 -- # val=2 00:06:07.896 17:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.896 17:10:17 -- accel/accel.sh@19 -- # IFS=: 00:06:07.896 17:10:17 -- accel/accel.sh@19 -- # read -r var val 00:06:07.896 17:10:17 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.896 17:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.896 17:10:17 -- accel/accel.sh@19 -- # IFS=: 00:06:07.896 17:10:17 -- accel/accel.sh@19 -- # read -r var val 00:06:07.896 17:10:17 -- accel/accel.sh@20 -- # val= 00:06:07.896 17:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.896 17:10:17 -- accel/accel.sh@19 -- # IFS=: 00:06:07.896 17:10:17 -- accel/accel.sh@19 -- # read -r var val 00:06:07.896 17:10:17 -- accel/accel.sh@20 -- # val=software 00:06:07.896 17:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.896 17:10:17 -- accel/accel.sh@22 -- # accel_module=software 00:06:07.896 17:10:17 -- accel/accel.sh@19 -- # IFS=: 00:06:07.896 17:10:17 -- accel/accel.sh@19 -- # read -r var val 00:06:07.896 17:10:17 -- accel/accel.sh@20 -- # val=32 00:06:07.896 17:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.896 17:10:17 -- accel/accel.sh@19 -- # IFS=: 00:06:07.896 17:10:17 -- accel/accel.sh@19 -- # read -r var val 00:06:07.896 17:10:17 -- accel/accel.sh@20 -- # val=32 00:06:07.896 17:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.896 17:10:17 -- accel/accel.sh@19 -- # IFS=: 00:06:07.896 17:10:17 -- accel/accel.sh@19 -- # read -r var val 00:06:07.896 17:10:17 -- 
accel/accel.sh@20 -- # val=1 00:06:07.896 17:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.896 17:10:17 -- accel/accel.sh@19 -- # IFS=: 00:06:07.896 17:10:17 -- accel/accel.sh@19 -- # read -r var val 00:06:07.896 17:10:17 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:07.897 17:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.897 17:10:17 -- accel/accel.sh@19 -- # IFS=: 00:06:07.897 17:10:17 -- accel/accel.sh@19 -- # read -r var val 00:06:07.897 17:10:17 -- accel/accel.sh@20 -- # val=Yes 00:06:07.897 17:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.897 17:10:17 -- accel/accel.sh@19 -- # IFS=: 00:06:07.897 17:10:17 -- accel/accel.sh@19 -- # read -r var val 00:06:07.897 17:10:17 -- accel/accel.sh@20 -- # val= 00:06:07.897 17:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.897 17:10:17 -- accel/accel.sh@19 -- # IFS=: 00:06:07.897 17:10:17 -- accel/accel.sh@19 -- # read -r var val 00:06:07.897 17:10:17 -- accel/accel.sh@20 -- # val= 00:06:07.897 17:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.897 17:10:17 -- accel/accel.sh@19 -- # IFS=: 00:06:07.897 17:10:17 -- accel/accel.sh@19 -- # read -r var val 00:06:09.274 17:10:18 -- accel/accel.sh@20 -- # val= 00:06:09.274 17:10:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.274 17:10:18 -- accel/accel.sh@19 -- # IFS=: 00:06:09.274 17:10:18 -- accel/accel.sh@19 -- # read -r var val 00:06:09.274 17:10:18 -- accel/accel.sh@20 -- # val= 00:06:09.274 17:10:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.274 17:10:18 -- accel/accel.sh@19 -- # IFS=: 00:06:09.274 17:10:18 -- accel/accel.sh@19 -- # read -r var val 00:06:09.274 17:10:18 -- accel/accel.sh@20 -- # val= 00:06:09.274 17:10:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.274 17:10:18 -- accel/accel.sh@19 -- # IFS=: 00:06:09.274 17:10:18 -- accel/accel.sh@19 -- # read -r var val 00:06:09.274 17:10:18 -- accel/accel.sh@20 -- # val= 00:06:09.274 17:10:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.274 17:10:18 -- accel/accel.sh@19 -- # IFS=: 00:06:09.274 17:10:18 -- accel/accel.sh@19 -- # read -r var val 00:06:09.274 17:10:18 -- accel/accel.sh@20 -- # val= 00:06:09.274 17:10:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.274 17:10:18 -- accel/accel.sh@19 -- # IFS=: 00:06:09.274 17:10:18 -- accel/accel.sh@19 -- # read -r var val 00:06:09.274 17:10:18 -- accel/accel.sh@20 -- # val= 00:06:09.274 17:10:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.274 17:10:18 -- accel/accel.sh@19 -- # IFS=: 00:06:09.274 17:10:18 -- accel/accel.sh@19 -- # read -r var val 00:06:09.274 17:10:18 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.274 17:10:18 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:09.274 17:10:18 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.274 00:06:09.274 real 0m1.359s 00:06:09.274 user 0m1.241s 00:06:09.274 sys 0m0.123s 00:06:09.274 17:10:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:09.274 17:10:18 -- common/autotest_common.sh@10 -- # set +x 00:06:09.274 ************************************ 00:06:09.274 END TEST accel_xor 00:06:09.274 ************************************ 00:06:09.274 17:10:18 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:09.274 17:10:18 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:09.274 17:10:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.274 17:10:18 -- common/autotest_common.sh@10 -- # set +x 00:06:09.274 ************************************ 00:06:09.274 START TEST accel_xor 
00:06:09.274 ************************************ 00:06:09.274 17:10:18 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:06:09.274 17:10:18 -- accel/accel.sh@16 -- # local accel_opc 00:06:09.274 17:10:18 -- accel/accel.sh@17 -- # local accel_module 00:06:09.274 17:10:18 -- accel/accel.sh@19 -- # IFS=: 00:06:09.274 17:10:18 -- accel/accel.sh@19 -- # read -r var val 00:06:09.274 17:10:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:09.274 17:10:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:09.274 17:10:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.274 17:10:18 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.274 17:10:18 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.274 17:10:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.274 17:10:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.274 17:10:18 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.274 17:10:18 -- accel/accel.sh@40 -- # local IFS=, 00:06:09.274 17:10:18 -- accel/accel.sh@41 -- # jq -r . 00:06:09.274 [2024-04-24 17:10:18.417877] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:06:09.274 [2024-04-24 17:10:18.417925] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2944093 ] 00:06:09.274 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.274 [2024-04-24 17:10:18.474254] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.535 [2024-04-24 17:10:18.549677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.535 17:10:18 -- accel/accel.sh@20 -- # val= 00:06:09.535 17:10:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # IFS=: 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # read -r var val 00:06:09.535 17:10:18 -- accel/accel.sh@20 -- # val= 00:06:09.535 17:10:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # IFS=: 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # read -r var val 00:06:09.535 17:10:18 -- accel/accel.sh@20 -- # val=0x1 00:06:09.535 17:10:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # IFS=: 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # read -r var val 00:06:09.535 17:10:18 -- accel/accel.sh@20 -- # val= 00:06:09.535 17:10:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # IFS=: 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # read -r var val 00:06:09.535 17:10:18 -- accel/accel.sh@20 -- # val= 00:06:09.535 17:10:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # IFS=: 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # read -r var val 00:06:09.535 17:10:18 -- accel/accel.sh@20 -- # val=xor 00:06:09.535 17:10:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.535 17:10:18 -- accel/accel.sh@23 -- # accel_opc=xor 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # IFS=: 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # read -r var val 00:06:09.535 17:10:18 -- accel/accel.sh@20 -- # val=3 00:06:09.535 17:10:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # IFS=: 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # read -r var val 00:06:09.535 17:10:18 -- accel/accel.sh@20 -- # val='4096 
bytes' 00:06:09.535 17:10:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # IFS=: 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # read -r var val 00:06:09.535 17:10:18 -- accel/accel.sh@20 -- # val= 00:06:09.535 17:10:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # IFS=: 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # read -r var val 00:06:09.535 17:10:18 -- accel/accel.sh@20 -- # val=software 00:06:09.535 17:10:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.535 17:10:18 -- accel/accel.sh@22 -- # accel_module=software 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # IFS=: 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # read -r var val 00:06:09.535 17:10:18 -- accel/accel.sh@20 -- # val=32 00:06:09.535 17:10:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # IFS=: 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # read -r var val 00:06:09.535 17:10:18 -- accel/accel.sh@20 -- # val=32 00:06:09.535 17:10:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # IFS=: 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # read -r var val 00:06:09.535 17:10:18 -- accel/accel.sh@20 -- # val=1 00:06:09.535 17:10:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # IFS=: 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # read -r var val 00:06:09.535 17:10:18 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.535 17:10:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # IFS=: 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # read -r var val 00:06:09.535 17:10:18 -- accel/accel.sh@20 -- # val=Yes 00:06:09.535 17:10:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # IFS=: 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # read -r var val 00:06:09.535 17:10:18 -- accel/accel.sh@20 -- # val= 00:06:09.535 17:10:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # IFS=: 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # read -r var val 00:06:09.535 17:10:18 -- accel/accel.sh@20 -- # val= 00:06:09.535 17:10:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # IFS=: 00:06:09.535 17:10:18 -- accel/accel.sh@19 -- # read -r var val 00:06:10.913 17:10:19 -- accel/accel.sh@20 -- # val= 00:06:10.913 17:10:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.913 17:10:19 -- accel/accel.sh@19 -- # IFS=: 00:06:10.913 17:10:19 -- accel/accel.sh@19 -- # read -r var val 00:06:10.913 17:10:19 -- accel/accel.sh@20 -- # val= 00:06:10.913 17:10:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.913 17:10:19 -- accel/accel.sh@19 -- # IFS=: 00:06:10.913 17:10:19 -- accel/accel.sh@19 -- # read -r var val 00:06:10.913 17:10:19 -- accel/accel.sh@20 -- # val= 00:06:10.913 17:10:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.913 17:10:19 -- accel/accel.sh@19 -- # IFS=: 00:06:10.913 17:10:19 -- accel/accel.sh@19 -- # read -r var val 00:06:10.913 17:10:19 -- accel/accel.sh@20 -- # val= 00:06:10.913 17:10:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.913 17:10:19 -- accel/accel.sh@19 -- # IFS=: 00:06:10.913 17:10:19 -- accel/accel.sh@19 -- # read -r var val 00:06:10.913 17:10:19 -- accel/accel.sh@20 -- # val= 00:06:10.913 17:10:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.913 17:10:19 -- accel/accel.sh@19 -- # IFS=: 00:06:10.913 17:10:19 -- accel/accel.sh@19 -- # read -r 
var val 00:06:10.913 17:10:19 -- accel/accel.sh@20 -- # val= 00:06:10.913 17:10:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.913 17:10:19 -- accel/accel.sh@19 -- # IFS=: 00:06:10.913 17:10:19 -- accel/accel.sh@19 -- # read -r var val 00:06:10.913 17:10:19 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:10.913 17:10:19 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:10.913 17:10:19 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.913 00:06:10.913 real 0m1.354s 00:06:10.913 user 0m1.238s 00:06:10.913 sys 0m0.121s 00:06:10.913 17:10:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:10.913 17:10:19 -- common/autotest_common.sh@10 -- # set +x 00:06:10.913 ************************************ 00:06:10.913 END TEST accel_xor 00:06:10.913 ************************************ 00:06:10.913 17:10:19 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:10.913 17:10:19 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:10.913 17:10:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.913 17:10:19 -- common/autotest_common.sh@10 -- # set +x 00:06:10.913 ************************************ 00:06:10.913 START TEST accel_dif_verify 00:06:10.913 ************************************ 00:06:10.913 17:10:19 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:06:10.913 17:10:19 -- accel/accel.sh@16 -- # local accel_opc 00:06:10.913 17:10:19 -- accel/accel.sh@17 -- # local accel_module 00:06:10.913 17:10:19 -- accel/accel.sh@19 -- # IFS=: 00:06:10.913 17:10:19 -- accel/accel.sh@19 -- # read -r var val 00:06:10.913 17:10:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:10.913 17:10:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:10.913 17:10:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.913 17:10:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.913 17:10:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.913 17:10:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.913 17:10:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.913 17:10:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.913 17:10:19 -- accel/accel.sh@40 -- # local IFS=, 00:06:10.913 17:10:19 -- accel/accel.sh@41 -- # jq -r . 00:06:10.913 [2024-04-24 17:10:19.939034] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
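Aside on the config plumbing visible in every accel_perf command line above: the harness hands the JSON built by build_accel_config to accel_perf on file descriptor 62 (-c /dev/fd/62), filtered through jq -r . . A minimal sketch of that mechanism for the dif_verify case follows; the placeholder JSON stands in for whatever module configuration build_accel_config would actually emit, and the build path is assumed from this workspace.

cfg='{"subsystems": []}'                    # placeholder; real content comes from build_accel_config
exec 62< <(printf '%s\n' "$cfg" | jq -r .)  # expose the JSON on fd 62, as the harness does
./build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify -y
exec 62<&-                                  # close the descriptor again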
00:06:10.913 [2024-04-24 17:10:19.939093] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2944350 ] 00:06:10.913 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.913 [2024-04-24 17:10:19.999569] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.913 [2024-04-24 17:10:20.088817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.913 17:10:20 -- accel/accel.sh@20 -- # val= 00:06:10.913 17:10:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # IFS=: 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # read -r var val 00:06:10.913 17:10:20 -- accel/accel.sh@20 -- # val= 00:06:10.913 17:10:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # IFS=: 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # read -r var val 00:06:10.913 17:10:20 -- accel/accel.sh@20 -- # val=0x1 00:06:10.913 17:10:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # IFS=: 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # read -r var val 00:06:10.913 17:10:20 -- accel/accel.sh@20 -- # val= 00:06:10.913 17:10:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # IFS=: 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # read -r var val 00:06:10.913 17:10:20 -- accel/accel.sh@20 -- # val= 00:06:10.913 17:10:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # IFS=: 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # read -r var val 00:06:10.913 17:10:20 -- accel/accel.sh@20 -- # val=dif_verify 00:06:10.913 17:10:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.913 17:10:20 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # IFS=: 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # read -r var val 00:06:10.913 17:10:20 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:10.913 17:10:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # IFS=: 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # read -r var val 00:06:10.913 17:10:20 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:10.913 17:10:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # IFS=: 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # read -r var val 00:06:10.913 17:10:20 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:10.913 17:10:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # IFS=: 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # read -r var val 00:06:10.913 17:10:20 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:10.913 17:10:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # IFS=: 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # read -r var val 00:06:10.913 17:10:20 -- accel/accel.sh@20 -- # val= 00:06:10.913 17:10:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # IFS=: 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # read -r var val 00:06:10.913 17:10:20 -- accel/accel.sh@20 -- # val=software 00:06:10.913 17:10:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.913 17:10:20 -- accel/accel.sh@22 -- # accel_module=software 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # IFS=: 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # read -r 
var val 00:06:10.913 17:10:20 -- accel/accel.sh@20 -- # val=32 00:06:10.913 17:10:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # IFS=: 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # read -r var val 00:06:10.913 17:10:20 -- accel/accel.sh@20 -- # val=32 00:06:10.913 17:10:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # IFS=: 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # read -r var val 00:06:10.913 17:10:20 -- accel/accel.sh@20 -- # val=1 00:06:10.913 17:10:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # IFS=: 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # read -r var val 00:06:10.913 17:10:20 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:10.913 17:10:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # IFS=: 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # read -r var val 00:06:10.913 17:10:20 -- accel/accel.sh@20 -- # val=No 00:06:10.913 17:10:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # IFS=: 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # read -r var val 00:06:10.913 17:10:20 -- accel/accel.sh@20 -- # val= 00:06:10.913 17:10:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # IFS=: 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # read -r var val 00:06:10.913 17:10:20 -- accel/accel.sh@20 -- # val= 00:06:10.913 17:10:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # IFS=: 00:06:10.913 17:10:20 -- accel/accel.sh@19 -- # read -r var val 00:06:12.293 17:10:21 -- accel/accel.sh@20 -- # val= 00:06:12.293 17:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.293 17:10:21 -- accel/accel.sh@19 -- # IFS=: 00:06:12.293 17:10:21 -- accel/accel.sh@19 -- # read -r var val 00:06:12.293 17:10:21 -- accel/accel.sh@20 -- # val= 00:06:12.293 17:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.293 17:10:21 -- accel/accel.sh@19 -- # IFS=: 00:06:12.293 17:10:21 -- accel/accel.sh@19 -- # read -r var val 00:06:12.293 17:10:21 -- accel/accel.sh@20 -- # val= 00:06:12.293 17:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.293 17:10:21 -- accel/accel.sh@19 -- # IFS=: 00:06:12.293 17:10:21 -- accel/accel.sh@19 -- # read -r var val 00:06:12.293 17:10:21 -- accel/accel.sh@20 -- # val= 00:06:12.293 17:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.293 17:10:21 -- accel/accel.sh@19 -- # IFS=: 00:06:12.293 17:10:21 -- accel/accel.sh@19 -- # read -r var val 00:06:12.293 17:10:21 -- accel/accel.sh@20 -- # val= 00:06:12.293 17:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.293 17:10:21 -- accel/accel.sh@19 -- # IFS=: 00:06:12.293 17:10:21 -- accel/accel.sh@19 -- # read -r var val 00:06:12.293 17:10:21 -- accel/accel.sh@20 -- # val= 00:06:12.293 17:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.293 17:10:21 -- accel/accel.sh@19 -- # IFS=: 00:06:12.293 17:10:21 -- accel/accel.sh@19 -- # read -r var val 00:06:12.293 17:10:21 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.293 17:10:21 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:12.293 17:10:21 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.293 00:06:12.293 real 0m1.382s 00:06:12.293 user 0m1.274s 00:06:12.293 sys 0m0.122s 00:06:12.293 17:10:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:12.293 17:10:21 -- common/autotest_common.sh@10 -- # set +x 00:06:12.293 
************************************ 00:06:12.293 END TEST accel_dif_verify 00:06:12.293 ************************************ 00:06:12.293 17:10:21 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:12.293 17:10:21 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:12.293 17:10:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.293 17:10:21 -- common/autotest_common.sh@10 -- # set +x 00:06:12.293 ************************************ 00:06:12.293 START TEST accel_dif_generate 00:06:12.293 ************************************ 00:06:12.293 17:10:21 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:06:12.293 17:10:21 -- accel/accel.sh@16 -- # local accel_opc 00:06:12.293 17:10:21 -- accel/accel.sh@17 -- # local accel_module 00:06:12.293 17:10:21 -- accel/accel.sh@19 -- # IFS=: 00:06:12.293 17:10:21 -- accel/accel.sh@19 -- # read -r var val 00:06:12.293 17:10:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:12.293 17:10:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:12.293 17:10:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.293 17:10:21 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.293 17:10:21 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.293 17:10:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.293 17:10:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.293 17:10:21 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.293 17:10:21 -- accel/accel.sh@40 -- # local IFS=, 00:06:12.293 17:10:21 -- accel/accel.sh@41 -- # jq -r . 00:06:12.293 [2024-04-24 17:10:21.495234] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
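Aside on the core pinning seen in each "DPDK EAL parameters" line above: the "-c 0x1" entry is the coremask handed to the EAL, which is why every case reports "Total cores available: 1" and "Reactor started on core 0". When launching accel_perf by hand, the equivalent knob is the standard SPDK application core-mask option; the sketch below assumes that option is -m, as in other SPDK apps, and reuses the workspace path from this log.

cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
./build/examples/accel_perf \
    -m 0x1 \              # pin to core 0 (assumed standard SPDK app core-mask flag)
    -t 1 -w dif_generate -y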
00:06:12.293 [2024-04-24 17:10:21.495293] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2944713 ] 00:06:12.293 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.553 [2024-04-24 17:10:21.555996] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.553 [2024-04-24 17:10:21.631841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.553 17:10:21 -- accel/accel.sh@20 -- # val= 00:06:12.553 17:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # IFS=: 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # read -r var val 00:06:12.553 17:10:21 -- accel/accel.sh@20 -- # val= 00:06:12.553 17:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # IFS=: 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # read -r var val 00:06:12.553 17:10:21 -- accel/accel.sh@20 -- # val=0x1 00:06:12.553 17:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # IFS=: 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # read -r var val 00:06:12.553 17:10:21 -- accel/accel.sh@20 -- # val= 00:06:12.553 17:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # IFS=: 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # read -r var val 00:06:12.553 17:10:21 -- accel/accel.sh@20 -- # val= 00:06:12.553 17:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # IFS=: 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # read -r var val 00:06:12.553 17:10:21 -- accel/accel.sh@20 -- # val=dif_generate 00:06:12.553 17:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.553 17:10:21 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # IFS=: 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # read -r var val 00:06:12.553 17:10:21 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.553 17:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # IFS=: 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # read -r var val 00:06:12.553 17:10:21 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.553 17:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # IFS=: 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # read -r var val 00:06:12.553 17:10:21 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:12.553 17:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # IFS=: 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # read -r var val 00:06:12.553 17:10:21 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:12.553 17:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # IFS=: 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # read -r var val 00:06:12.553 17:10:21 -- accel/accel.sh@20 -- # val= 00:06:12.553 17:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # IFS=: 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # read -r var val 00:06:12.553 17:10:21 -- accel/accel.sh@20 -- # val=software 00:06:12.553 17:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.553 17:10:21 -- accel/accel.sh@22 -- # accel_module=software 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # IFS=: 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # read 
-r var val 00:06:12.553 17:10:21 -- accel/accel.sh@20 -- # val=32 00:06:12.553 17:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # IFS=: 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # read -r var val 00:06:12.553 17:10:21 -- accel/accel.sh@20 -- # val=32 00:06:12.553 17:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # IFS=: 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # read -r var val 00:06:12.553 17:10:21 -- accel/accel.sh@20 -- # val=1 00:06:12.553 17:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # IFS=: 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # read -r var val 00:06:12.553 17:10:21 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.553 17:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # IFS=: 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # read -r var val 00:06:12.553 17:10:21 -- accel/accel.sh@20 -- # val=No 00:06:12.553 17:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # IFS=: 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # read -r var val 00:06:12.553 17:10:21 -- accel/accel.sh@20 -- # val= 00:06:12.553 17:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # IFS=: 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # read -r var val 00:06:12.553 17:10:21 -- accel/accel.sh@20 -- # val= 00:06:12.553 17:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # IFS=: 00:06:12.553 17:10:21 -- accel/accel.sh@19 -- # read -r var val 00:06:13.930 17:10:22 -- accel/accel.sh@20 -- # val= 00:06:13.930 17:10:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.930 17:10:22 -- accel/accel.sh@19 -- # IFS=: 00:06:13.930 17:10:22 -- accel/accel.sh@19 -- # read -r var val 00:06:13.930 17:10:22 -- accel/accel.sh@20 -- # val= 00:06:13.930 17:10:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.930 17:10:22 -- accel/accel.sh@19 -- # IFS=: 00:06:13.930 17:10:22 -- accel/accel.sh@19 -- # read -r var val 00:06:13.930 17:10:22 -- accel/accel.sh@20 -- # val= 00:06:13.930 17:10:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.930 17:10:22 -- accel/accel.sh@19 -- # IFS=: 00:06:13.930 17:10:22 -- accel/accel.sh@19 -- # read -r var val 00:06:13.930 17:10:22 -- accel/accel.sh@20 -- # val= 00:06:13.930 17:10:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.930 17:10:22 -- accel/accel.sh@19 -- # IFS=: 00:06:13.930 17:10:22 -- accel/accel.sh@19 -- # read -r var val 00:06:13.930 17:10:22 -- accel/accel.sh@20 -- # val= 00:06:13.930 17:10:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.930 17:10:22 -- accel/accel.sh@19 -- # IFS=: 00:06:13.930 17:10:22 -- accel/accel.sh@19 -- # read -r var val 00:06:13.930 17:10:22 -- accel/accel.sh@20 -- # val= 00:06:13.930 17:10:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.930 17:10:22 -- accel/accel.sh@19 -- # IFS=: 00:06:13.930 17:10:22 -- accel/accel.sh@19 -- # read -r var val 00:06:13.930 17:10:22 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:13.930 17:10:22 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:13.930 17:10:22 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.930 00:06:13.930 real 0m1.367s 00:06:13.930 user 0m1.263s 00:06:13.930 sys 0m0.118s 00:06:13.930 17:10:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:13.930 17:10:22 -- common/autotest_common.sh@10 -- # set +x 00:06:13.930 
************************************ 00:06:13.930 END TEST accel_dif_generate 00:06:13.930 ************************************ 00:06:13.930 17:10:22 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:13.930 17:10:22 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:13.930 17:10:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.930 17:10:22 -- common/autotest_common.sh@10 -- # set +x 00:06:13.930 ************************************ 00:06:13.930 START TEST accel_dif_generate_copy 00:06:13.930 ************************************ 00:06:13.930 17:10:22 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:06:13.930 17:10:22 -- accel/accel.sh@16 -- # local accel_opc 00:06:13.930 17:10:22 -- accel/accel.sh@17 -- # local accel_module 00:06:13.930 17:10:22 -- accel/accel.sh@19 -- # IFS=: 00:06:13.930 17:10:22 -- accel/accel.sh@19 -- # read -r var val 00:06:13.930 17:10:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:13.930 17:10:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.930 17:10:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:13.930 17:10:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.930 17:10:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.930 17:10:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.930 17:10:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.930 17:10:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.930 17:10:22 -- accel/accel.sh@40 -- # local IFS=, 00:06:13.930 17:10:22 -- accel/accel.sh@41 -- # jq -r . 00:06:13.930 [2024-04-24 17:10:23.015710] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
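Aside on the real/user/sys trio printed after every sub-test above: it comes from the run_test wrapper timing each accel_test call. Timing a single workload by hand gives a comparable readout, for example for the dif_generate_copy case that starts here (build path assumed as before):

cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
time ./build/examples/accel_perf -t 1 -w dif_generate_copy -y
# bash's time keyword prints real/user/sys lines matching the summaries in this log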
00:06:13.930 [2024-04-24 17:10:23.015767] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2945064 ] 00:06:13.930 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.930 [2024-04-24 17:10:23.072630] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.930 [2024-04-24 17:10:23.143812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.206 17:10:23 -- accel/accel.sh@20 -- # val= 00:06:14.206 17:10:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # IFS=: 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # read -r var val 00:06:14.206 17:10:23 -- accel/accel.sh@20 -- # val= 00:06:14.206 17:10:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # IFS=: 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # read -r var val 00:06:14.206 17:10:23 -- accel/accel.sh@20 -- # val=0x1 00:06:14.206 17:10:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # IFS=: 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # read -r var val 00:06:14.206 17:10:23 -- accel/accel.sh@20 -- # val= 00:06:14.206 17:10:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # IFS=: 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # read -r var val 00:06:14.206 17:10:23 -- accel/accel.sh@20 -- # val= 00:06:14.206 17:10:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # IFS=: 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # read -r var val 00:06:14.206 17:10:23 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:14.206 17:10:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.206 17:10:23 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # IFS=: 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # read -r var val 00:06:14.206 17:10:23 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:14.206 17:10:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # IFS=: 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # read -r var val 00:06:14.206 17:10:23 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:14.206 17:10:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # IFS=: 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # read -r var val 00:06:14.206 17:10:23 -- accel/accel.sh@20 -- # val= 00:06:14.206 17:10:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # IFS=: 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # read -r var val 00:06:14.206 17:10:23 -- accel/accel.sh@20 -- # val=software 00:06:14.206 17:10:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.206 17:10:23 -- accel/accel.sh@22 -- # accel_module=software 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # IFS=: 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # read -r var val 00:06:14.206 17:10:23 -- accel/accel.sh@20 -- # val=32 00:06:14.206 17:10:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # IFS=: 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # read -r var val 00:06:14.206 17:10:23 -- accel/accel.sh@20 -- # val=32 00:06:14.206 17:10:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # IFS=: 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # read -r 
var val 00:06:14.206 17:10:23 -- accel/accel.sh@20 -- # val=1 00:06:14.206 17:10:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # IFS=: 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # read -r var val 00:06:14.206 17:10:23 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:14.206 17:10:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # IFS=: 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # read -r var val 00:06:14.206 17:10:23 -- accel/accel.sh@20 -- # val=No 00:06:14.206 17:10:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # IFS=: 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # read -r var val 00:06:14.206 17:10:23 -- accel/accel.sh@20 -- # val= 00:06:14.206 17:10:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # IFS=: 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # read -r var val 00:06:14.206 17:10:23 -- accel/accel.sh@20 -- # val= 00:06:14.206 17:10:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # IFS=: 00:06:14.206 17:10:23 -- accel/accel.sh@19 -- # read -r var val 00:06:15.190 17:10:24 -- accel/accel.sh@20 -- # val= 00:06:15.190 17:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.190 17:10:24 -- accel/accel.sh@19 -- # IFS=: 00:06:15.190 17:10:24 -- accel/accel.sh@19 -- # read -r var val 00:06:15.190 17:10:24 -- accel/accel.sh@20 -- # val= 00:06:15.190 17:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.190 17:10:24 -- accel/accel.sh@19 -- # IFS=: 00:06:15.190 17:10:24 -- accel/accel.sh@19 -- # read -r var val 00:06:15.190 17:10:24 -- accel/accel.sh@20 -- # val= 00:06:15.190 17:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.190 17:10:24 -- accel/accel.sh@19 -- # IFS=: 00:06:15.190 17:10:24 -- accel/accel.sh@19 -- # read -r var val 00:06:15.190 17:10:24 -- accel/accel.sh@20 -- # val= 00:06:15.190 17:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.190 17:10:24 -- accel/accel.sh@19 -- # IFS=: 00:06:15.190 17:10:24 -- accel/accel.sh@19 -- # read -r var val 00:06:15.190 17:10:24 -- accel/accel.sh@20 -- # val= 00:06:15.190 17:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.190 17:10:24 -- accel/accel.sh@19 -- # IFS=: 00:06:15.190 17:10:24 -- accel/accel.sh@19 -- # read -r var val 00:06:15.190 17:10:24 -- accel/accel.sh@20 -- # val= 00:06:15.190 17:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.190 17:10:24 -- accel/accel.sh@19 -- # IFS=: 00:06:15.190 17:10:24 -- accel/accel.sh@19 -- # read -r var val 00:06:15.190 17:10:24 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.190 17:10:24 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:15.190 17:10:24 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.190 00:06:15.190 real 0m1.354s 00:06:15.190 user 0m1.255s 00:06:15.190 sys 0m0.114s 00:06:15.190 17:10:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:15.190 17:10:24 -- common/autotest_common.sh@10 -- # set +x 00:06:15.190 ************************************ 00:06:15.190 END TEST accel_dif_generate_copy 00:06:15.190 ************************************ 00:06:15.190 17:10:24 -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:15.190 17:10:24 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:15.190 17:10:24 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:15.190 17:10:24 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.190 17:10:24 -- common/autotest_common.sh@10 -- # set +x 00:06:15.450 ************************************ 00:06:15.450 START TEST accel_comp 00:06:15.450 ************************************ 00:06:15.450 17:10:24 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:15.450 17:10:24 -- accel/accel.sh@16 -- # local accel_opc 00:06:15.450 17:10:24 -- accel/accel.sh@17 -- # local accel_module 00:06:15.450 17:10:24 -- accel/accel.sh@19 -- # IFS=: 00:06:15.450 17:10:24 -- accel/accel.sh@19 -- # read -r var val 00:06:15.450 17:10:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:15.450 17:10:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:15.450 17:10:24 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.450 17:10:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.450 17:10:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.450 17:10:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.450 17:10:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.450 17:10:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.450 17:10:24 -- accel/accel.sh@40 -- # local IFS=, 00:06:15.450 17:10:24 -- accel/accel.sh@41 -- # jq -r . 00:06:15.450 [2024-04-24 17:10:24.546660] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:06:15.450 [2024-04-24 17:10:24.546714] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2945341 ] 00:06:15.450 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.450 [2024-04-24 17:10:24.604901] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.450 [2024-04-24 17:10:24.684276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.710 17:10:24 -- accel/accel.sh@20 -- # val= 00:06:15.710 17:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # IFS=: 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # read -r var val 00:06:15.710 17:10:24 -- accel/accel.sh@20 -- # val= 00:06:15.710 17:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # IFS=: 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # read -r var val 00:06:15.710 17:10:24 -- accel/accel.sh@20 -- # val= 00:06:15.710 17:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # IFS=: 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # read -r var val 00:06:15.710 17:10:24 -- accel/accel.sh@20 -- # val=0x1 00:06:15.710 17:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # IFS=: 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # read -r var val 00:06:15.710 17:10:24 -- accel/accel.sh@20 -- # val= 00:06:15.710 17:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # IFS=: 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # read -r var val 00:06:15.710 17:10:24 -- accel/accel.sh@20 -- # val= 00:06:15.710 17:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # IFS=: 00:06:15.710 17:10:24 -- 
accel/accel.sh@19 -- # read -r var val 00:06:15.710 17:10:24 -- accel/accel.sh@20 -- # val=compress 00:06:15.710 17:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.710 17:10:24 -- accel/accel.sh@23 -- # accel_opc=compress 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # IFS=: 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # read -r var val 00:06:15.710 17:10:24 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.710 17:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # IFS=: 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # read -r var val 00:06:15.710 17:10:24 -- accel/accel.sh@20 -- # val= 00:06:15.710 17:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # IFS=: 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # read -r var val 00:06:15.710 17:10:24 -- accel/accel.sh@20 -- # val=software 00:06:15.710 17:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.710 17:10:24 -- accel/accel.sh@22 -- # accel_module=software 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # IFS=: 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # read -r var val 00:06:15.710 17:10:24 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:15.710 17:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # IFS=: 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # read -r var val 00:06:15.710 17:10:24 -- accel/accel.sh@20 -- # val=32 00:06:15.710 17:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # IFS=: 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # read -r var val 00:06:15.710 17:10:24 -- accel/accel.sh@20 -- # val=32 00:06:15.710 17:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # IFS=: 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # read -r var val 00:06:15.710 17:10:24 -- accel/accel.sh@20 -- # val=1 00:06:15.710 17:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # IFS=: 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # read -r var val 00:06:15.710 17:10:24 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.710 17:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # IFS=: 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # read -r var val 00:06:15.710 17:10:24 -- accel/accel.sh@20 -- # val=No 00:06:15.710 17:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # IFS=: 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # read -r var val 00:06:15.710 17:10:24 -- accel/accel.sh@20 -- # val= 00:06:15.710 17:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # IFS=: 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # read -r var val 00:06:15.710 17:10:24 -- accel/accel.sh@20 -- # val= 00:06:15.710 17:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # IFS=: 00:06:15.710 17:10:24 -- accel/accel.sh@19 -- # read -r var val 00:06:16.647 17:10:25 -- accel/accel.sh@20 -- # val= 00:06:16.647 17:10:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.647 17:10:25 -- accel/accel.sh@19 -- # IFS=: 00:06:16.647 17:10:25 -- accel/accel.sh@19 -- # read -r var val 00:06:16.647 17:10:25 -- accel/accel.sh@20 -- # val= 00:06:16.647 17:10:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.647 17:10:25 -- accel/accel.sh@19 -- # IFS=: 00:06:16.647 17:10:25 -- accel/accel.sh@19 -- # read -r var 
val 00:06:16.647 17:10:25 -- accel/accel.sh@20 -- # val= 00:06:16.647 17:10:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.647 17:10:25 -- accel/accel.sh@19 -- # IFS=: 00:06:16.647 17:10:25 -- accel/accel.sh@19 -- # read -r var val 00:06:16.647 17:10:25 -- accel/accel.sh@20 -- # val= 00:06:16.647 17:10:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.647 17:10:25 -- accel/accel.sh@19 -- # IFS=: 00:06:16.647 17:10:25 -- accel/accel.sh@19 -- # read -r var val 00:06:16.647 17:10:25 -- accel/accel.sh@20 -- # val= 00:06:16.647 17:10:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.647 17:10:25 -- accel/accel.sh@19 -- # IFS=: 00:06:16.647 17:10:25 -- accel/accel.sh@19 -- # read -r var val 00:06:16.647 17:10:25 -- accel/accel.sh@20 -- # val= 00:06:16.647 17:10:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.647 17:10:25 -- accel/accel.sh@19 -- # IFS=: 00:06:16.647 17:10:25 -- accel/accel.sh@19 -- # read -r var val 00:06:16.647 17:10:25 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.647 17:10:25 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:16.647 17:10:25 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.647 00:06:16.647 real 0m1.372s 00:06:16.647 user 0m1.271s 00:06:16.647 sys 0m0.114s 00:06:16.647 17:10:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:16.647 17:10:25 -- common/autotest_common.sh@10 -- # set +x 00:06:16.647 ************************************ 00:06:16.647 END TEST accel_comp 00:06:16.647 ************************************ 00:06:16.906 17:10:25 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:16.907 17:10:25 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:16.907 17:10:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.907 17:10:25 -- common/autotest_common.sh@10 -- # set +x 00:06:16.907 ************************************ 00:06:16.907 START TEST accel_decomp 00:06:16.907 ************************************ 00:06:16.907 17:10:26 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:16.907 17:10:26 -- accel/accel.sh@16 -- # local accel_opc 00:06:16.907 17:10:26 -- accel/accel.sh@17 -- # local accel_module 00:06:16.907 17:10:26 -- accel/accel.sh@19 -- # IFS=: 00:06:16.907 17:10:26 -- accel/accel.sh@19 -- # read -r var val 00:06:16.907 17:10:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:16.907 17:10:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:16.907 17:10:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.907 17:10:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.907 17:10:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.907 17:10:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.907 17:10:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.907 17:10:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.907 17:10:26 -- accel/accel.sh@40 -- # local IFS=, 00:06:16.907 17:10:26 -- accel/accel.sh@41 -- # jq -r . 00:06:16.907 [2024-04-24 17:10:26.081786] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
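The decompress case beginning here additionally points accel_perf at the pre-compressed input shipped with the tree (test/accel/bib) and passes -y, as the wrapper's command line above shows. A comparable manual run, under the same assumptions as the earlier sketch, might be:

  # one-second decompress run against the bundled test input
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress \
      -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y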
00:06:16.907 [2024-04-24 17:10:26.081860] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2945592 ] 00:06:16.907 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.907 [2024-04-24 17:10:26.140376] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.166 [2024-04-24 17:10:26.216306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.166 17:10:26 -- accel/accel.sh@20 -- # val= 00:06:17.166 17:10:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.166 17:10:26 -- accel/accel.sh@19 -- # IFS=: 00:06:17.166 17:10:26 -- accel/accel.sh@19 -- # read -r var val 00:06:17.166 17:10:26 -- accel/accel.sh@20 -- # val= 00:06:17.166 17:10:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.167 17:10:26 -- accel/accel.sh@19 -- # IFS=: 00:06:17.167 17:10:26 -- accel/accel.sh@19 -- # read -r var val 00:06:17.167 17:10:26 -- accel/accel.sh@20 -- # val= 00:06:17.167 17:10:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.167 17:10:26 -- accel/accel.sh@19 -- # IFS=: 00:06:17.167 17:10:26 -- accel/accel.sh@19 -- # read -r var val 00:06:17.167 17:10:26 -- accel/accel.sh@20 -- # val=0x1 00:06:17.167 17:10:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.167 17:10:26 -- accel/accel.sh@19 -- # IFS=: 00:06:17.167 17:10:26 -- accel/accel.sh@19 -- # read -r var val 00:06:17.167 17:10:26 -- accel/accel.sh@20 -- # val= 00:06:17.167 17:10:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.167 17:10:26 -- accel/accel.sh@19 -- # IFS=: 00:06:17.167 17:10:26 -- accel/accel.sh@19 -- # read -r var val 00:06:17.167 17:10:26 -- accel/accel.sh@20 -- # val= 00:06:17.167 17:10:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.167 17:10:26 -- accel/accel.sh@19 -- # IFS=: 00:06:17.167 17:10:26 -- accel/accel.sh@19 -- # read -r var val 00:06:17.167 17:10:26 -- accel/accel.sh@20 -- # val=decompress 00:06:17.167 17:10:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.167 17:10:26 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:17.167 17:10:26 -- accel/accel.sh@19 -- # IFS=: 00:06:17.167 17:10:26 -- accel/accel.sh@19 -- # read -r var val 00:06:17.167 17:10:26 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:17.167 17:10:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.167 17:10:26 -- accel/accel.sh@19 -- # IFS=: 00:06:17.167 17:10:26 -- accel/accel.sh@19 -- # read -r var val 00:06:17.167 17:10:26 -- accel/accel.sh@20 -- # val= 00:06:17.167 17:10:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.167 17:10:26 -- accel/accel.sh@19 -- # IFS=: 00:06:17.167 17:10:26 -- accel/accel.sh@19 -- # read -r var val 00:06:17.167 17:10:26 -- accel/accel.sh@20 -- # val=software 00:06:17.167 17:10:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.167 17:10:26 -- accel/accel.sh@22 -- # accel_module=software 00:06:17.167 17:10:26 -- accel/accel.sh@19 -- # IFS=: 00:06:17.167 17:10:26 -- accel/accel.sh@19 -- # read -r var val 00:06:17.167 17:10:26 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:17.167 17:10:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.167 17:10:26 -- accel/accel.sh@19 -- # IFS=: 00:06:17.167 17:10:26 -- accel/accel.sh@19 -- # read -r var val 00:06:17.167 17:10:26 -- accel/accel.sh@20 -- # val=32 00:06:17.167 17:10:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.167 17:10:26 -- accel/accel.sh@19 -- # IFS=: 00:06:17.167 17:10:26 -- 
accel/accel.sh@19 -- # read -r var val 00:06:17.167 17:10:26 -- accel/accel.sh@20 -- # val=32 00:06:17.167 17:10:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.167 17:10:26 -- accel/accel.sh@19 -- # IFS=: 00:06:17.167 17:10:26 -- accel/accel.sh@19 -- # read -r var val 00:06:17.167 17:10:26 -- accel/accel.sh@20 -- # val=1 00:06:17.167 17:10:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.167 17:10:26 -- accel/accel.sh@19 -- # IFS=: 00:06:17.167 17:10:26 -- accel/accel.sh@19 -- # read -r var val 00:06:17.167 17:10:26 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.167 17:10:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.167 17:10:26 -- accel/accel.sh@19 -- # IFS=: 00:06:17.167 17:10:26 -- accel/accel.sh@19 -- # read -r var val 00:06:17.167 17:10:26 -- accel/accel.sh@20 -- # val=Yes 00:06:17.167 17:10:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.167 17:10:26 -- accel/accel.sh@19 -- # IFS=: 00:06:17.167 17:10:26 -- accel/accel.sh@19 -- # read -r var val 00:06:17.167 17:10:26 -- accel/accel.sh@20 -- # val= 00:06:17.167 17:10:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.167 17:10:26 -- accel/accel.sh@19 -- # IFS=: 00:06:17.167 17:10:26 -- accel/accel.sh@19 -- # read -r var val 00:06:17.167 17:10:26 -- accel/accel.sh@20 -- # val= 00:06:17.167 17:10:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.167 17:10:26 -- accel/accel.sh@19 -- # IFS=: 00:06:17.167 17:10:26 -- accel/accel.sh@19 -- # read -r var val 00:06:18.546 17:10:27 -- accel/accel.sh@20 -- # val= 00:06:18.546 17:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.546 17:10:27 -- accel/accel.sh@19 -- # IFS=: 00:06:18.546 17:10:27 -- accel/accel.sh@19 -- # read -r var val 00:06:18.546 17:10:27 -- accel/accel.sh@20 -- # val= 00:06:18.546 17:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.546 17:10:27 -- accel/accel.sh@19 -- # IFS=: 00:06:18.546 17:10:27 -- accel/accel.sh@19 -- # read -r var val 00:06:18.546 17:10:27 -- accel/accel.sh@20 -- # val= 00:06:18.546 17:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.546 17:10:27 -- accel/accel.sh@19 -- # IFS=: 00:06:18.546 17:10:27 -- accel/accel.sh@19 -- # read -r var val 00:06:18.546 17:10:27 -- accel/accel.sh@20 -- # val= 00:06:18.546 17:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.546 17:10:27 -- accel/accel.sh@19 -- # IFS=: 00:06:18.546 17:10:27 -- accel/accel.sh@19 -- # read -r var val 00:06:18.546 17:10:27 -- accel/accel.sh@20 -- # val= 00:06:18.546 17:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.546 17:10:27 -- accel/accel.sh@19 -- # IFS=: 00:06:18.546 17:10:27 -- accel/accel.sh@19 -- # read -r var val 00:06:18.546 17:10:27 -- accel/accel.sh@20 -- # val= 00:06:18.546 17:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.546 17:10:27 -- accel/accel.sh@19 -- # IFS=: 00:06:18.546 17:10:27 -- accel/accel.sh@19 -- # read -r var val 00:06:18.546 17:10:27 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.546 17:10:27 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:18.546 17:10:27 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.546 00:06:18.546 real 0m1.368s 00:06:18.546 user 0m1.265s 00:06:18.546 sys 0m0.117s 00:06:18.546 17:10:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:18.546 17:10:27 -- common/autotest_common.sh@10 -- # set +x 00:06:18.546 ************************************ 00:06:18.546 END TEST accel_decomp 00:06:18.546 ************************************ 00:06:18.546 17:10:27 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:18.546 17:10:27 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:18.546 17:10:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.546 17:10:27 -- common/autotest_common.sh@10 -- # set +x 00:06:18.546 ************************************ 00:06:18.546 START TEST accel_decmop_full 00:06:18.546 ************************************ 00:06:18.546 17:10:27 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:18.546 17:10:27 -- accel/accel.sh@16 -- # local accel_opc 00:06:18.546 17:10:27 -- accel/accel.sh@17 -- # local accel_module 00:06:18.546 17:10:27 -- accel/accel.sh@19 -- # IFS=: 00:06:18.546 17:10:27 -- accel/accel.sh@19 -- # read -r var val 00:06:18.546 17:10:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:18.546 17:10:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:18.546 17:10:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.546 17:10:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.546 17:10:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.546 17:10:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.546 17:10:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.546 17:10:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.546 17:10:27 -- accel/accel.sh@40 -- # local IFS=, 00:06:18.546 17:10:27 -- accel/accel.sh@41 -- # jq -r . 00:06:18.546 [2024-04-24 17:10:27.611357] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
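The accel_decmop_full variant starting here adds -o 0 to the same decompress command line; judging from the '111250 bytes' value echoed below (versus the default '4096 bytes' in the earlier runs), this submits the whole test file per operation rather than fixed 4 KiB chunks. Sketch under the same assumptions:

  # full-buffer decompress variant (-o 0)
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress \
      -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0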
00:06:18.546 [2024-04-24 17:10:27.611428] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2945854 ] 00:06:18.546 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.546 [2024-04-24 17:10:27.669133] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.546 [2024-04-24 17:10:27.743732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.546 17:10:27 -- accel/accel.sh@20 -- # val= 00:06:18.546 17:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.546 17:10:27 -- accel/accel.sh@19 -- # IFS=: 00:06:18.546 17:10:27 -- accel/accel.sh@19 -- # read -r var val 00:06:18.546 17:10:27 -- accel/accel.sh@20 -- # val= 00:06:18.546 17:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.546 17:10:27 -- accel/accel.sh@19 -- # IFS=: 00:06:18.546 17:10:27 -- accel/accel.sh@19 -- # read -r var val 00:06:18.546 17:10:27 -- accel/accel.sh@20 -- # val= 00:06:18.546 17:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.546 17:10:27 -- accel/accel.sh@19 -- # IFS=: 00:06:18.546 17:10:27 -- accel/accel.sh@19 -- # read -r var val 00:06:18.546 17:10:27 -- accel/accel.sh@20 -- # val=0x1 00:06:18.546 17:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.546 17:10:27 -- accel/accel.sh@19 -- # IFS=: 00:06:18.546 17:10:27 -- accel/accel.sh@19 -- # read -r var val 00:06:18.546 17:10:27 -- accel/accel.sh@20 -- # val= 00:06:18.546 17:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.546 17:10:27 -- accel/accel.sh@19 -- # IFS=: 00:06:18.546 17:10:27 -- accel/accel.sh@19 -- # read -r var val 00:06:18.546 17:10:27 -- accel/accel.sh@20 -- # val= 00:06:18.546 17:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.546 17:10:27 -- accel/accel.sh@19 -- # IFS=: 00:06:18.546 17:10:27 -- accel/accel.sh@19 -- # read -r var val 00:06:18.546 17:10:27 -- accel/accel.sh@20 -- # val=decompress 00:06:18.546 17:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.546 17:10:27 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:18.546 17:10:27 -- accel/accel.sh@19 -- # IFS=: 00:06:18.806 17:10:27 -- accel/accel.sh@19 -- # read -r var val 00:06:18.806 17:10:27 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:18.806 17:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.806 17:10:27 -- accel/accel.sh@19 -- # IFS=: 00:06:18.806 17:10:27 -- accel/accel.sh@19 -- # read -r var val 00:06:18.806 17:10:27 -- accel/accel.sh@20 -- # val= 00:06:18.806 17:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.806 17:10:27 -- accel/accel.sh@19 -- # IFS=: 00:06:18.806 17:10:27 -- accel/accel.sh@19 -- # read -r var val 00:06:18.806 17:10:27 -- accel/accel.sh@20 -- # val=software 00:06:18.806 17:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.806 17:10:27 -- accel/accel.sh@22 -- # accel_module=software 00:06:18.806 17:10:27 -- accel/accel.sh@19 -- # IFS=: 00:06:18.806 17:10:27 -- accel/accel.sh@19 -- # read -r var val 00:06:18.806 17:10:27 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:18.806 17:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.806 17:10:27 -- accel/accel.sh@19 -- # IFS=: 00:06:18.806 17:10:27 -- accel/accel.sh@19 -- # read -r var val 00:06:18.806 17:10:27 -- accel/accel.sh@20 -- # val=32 00:06:18.806 17:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.806 17:10:27 -- accel/accel.sh@19 -- # IFS=: 00:06:18.806 17:10:27 -- 
accel/accel.sh@19 -- # read -r var val 00:06:18.806 17:10:27 -- accel/accel.sh@20 -- # val=32 00:06:18.806 17:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.806 17:10:27 -- accel/accel.sh@19 -- # IFS=: 00:06:18.806 17:10:27 -- accel/accel.sh@19 -- # read -r var val 00:06:18.806 17:10:27 -- accel/accel.sh@20 -- # val=1 00:06:18.806 17:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.806 17:10:27 -- accel/accel.sh@19 -- # IFS=: 00:06:18.806 17:10:27 -- accel/accel.sh@19 -- # read -r var val 00:06:18.806 17:10:27 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.806 17:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.806 17:10:27 -- accel/accel.sh@19 -- # IFS=: 00:06:18.806 17:10:27 -- accel/accel.sh@19 -- # read -r var val 00:06:18.806 17:10:27 -- accel/accel.sh@20 -- # val=Yes 00:06:18.806 17:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.806 17:10:27 -- accel/accel.sh@19 -- # IFS=: 00:06:18.806 17:10:27 -- accel/accel.sh@19 -- # read -r var val 00:06:18.806 17:10:27 -- accel/accel.sh@20 -- # val= 00:06:18.806 17:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.806 17:10:27 -- accel/accel.sh@19 -- # IFS=: 00:06:18.806 17:10:27 -- accel/accel.sh@19 -- # read -r var val 00:06:18.806 17:10:27 -- accel/accel.sh@20 -- # val= 00:06:18.806 17:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.806 17:10:27 -- accel/accel.sh@19 -- # IFS=: 00:06:18.806 17:10:27 -- accel/accel.sh@19 -- # read -r var val 00:06:19.742 17:10:28 -- accel/accel.sh@20 -- # val= 00:06:19.742 17:10:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.742 17:10:28 -- accel/accel.sh@19 -- # IFS=: 00:06:19.742 17:10:28 -- accel/accel.sh@19 -- # read -r var val 00:06:19.742 17:10:28 -- accel/accel.sh@20 -- # val= 00:06:19.742 17:10:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.742 17:10:28 -- accel/accel.sh@19 -- # IFS=: 00:06:19.742 17:10:28 -- accel/accel.sh@19 -- # read -r var val 00:06:19.742 17:10:28 -- accel/accel.sh@20 -- # val= 00:06:19.742 17:10:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.742 17:10:28 -- accel/accel.sh@19 -- # IFS=: 00:06:19.742 17:10:28 -- accel/accel.sh@19 -- # read -r var val 00:06:19.742 17:10:28 -- accel/accel.sh@20 -- # val= 00:06:19.742 17:10:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.742 17:10:28 -- accel/accel.sh@19 -- # IFS=: 00:06:19.742 17:10:28 -- accel/accel.sh@19 -- # read -r var val 00:06:19.742 17:10:28 -- accel/accel.sh@20 -- # val= 00:06:19.742 17:10:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.742 17:10:28 -- accel/accel.sh@19 -- # IFS=: 00:06:19.742 17:10:28 -- accel/accel.sh@19 -- # read -r var val 00:06:19.742 17:10:28 -- accel/accel.sh@20 -- # val= 00:06:19.742 17:10:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.742 17:10:28 -- accel/accel.sh@19 -- # IFS=: 00:06:19.742 17:10:28 -- accel/accel.sh@19 -- # read -r var val 00:06:19.742 17:10:28 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.742 17:10:28 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:19.742 17:10:28 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.742 00:06:19.742 real 0m1.376s 00:06:19.742 user 0m1.267s 00:06:19.742 sys 0m0.121s 00:06:19.742 17:10:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:19.742 17:10:28 -- common/autotest_common.sh@10 -- # set +x 00:06:19.742 ************************************ 00:06:19.742 END TEST accel_decmop_full 00:06:19.742 ************************************ 00:06:20.002 17:10:28 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:20.002 17:10:28 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:20.002 17:10:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.002 17:10:28 -- common/autotest_common.sh@10 -- # set +x 00:06:20.002 ************************************ 00:06:20.002 START TEST accel_decomp_mcore 00:06:20.002 ************************************ 00:06:20.002 17:10:29 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:20.002 17:10:29 -- accel/accel.sh@16 -- # local accel_opc 00:06:20.002 17:10:29 -- accel/accel.sh@17 -- # local accel_module 00:06:20.002 17:10:29 -- accel/accel.sh@19 -- # IFS=: 00:06:20.002 17:10:29 -- accel/accel.sh@19 -- # read -r var val 00:06:20.002 17:10:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:20.002 17:10:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:20.002 17:10:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.002 17:10:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.002 17:10:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.002 17:10:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.002 17:10:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.002 17:10:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.002 17:10:29 -- accel/accel.sh@40 -- # local IFS=, 00:06:20.002 17:10:29 -- accel/accel.sh@41 -- # jq -r . 00:06:20.002 [2024-04-24 17:10:29.147194] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
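accel_decomp_mcore, starting here, repeats the decompress run with -m 0xf; the 'Total cores available: 4' notice and the four 'Reactor started on core N' lines below reflect the wider core mask. Sketch under the same assumptions:

  # decompress spread across four cores (core mask 0xf)
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress \
      -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf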
00:06:20.002 [2024-04-24 17:10:29.147272] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2946109 ] 00:06:20.002 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.002 [2024-04-24 17:10:29.205633] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:20.261 [2024-04-24 17:10:29.282782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.261 [2024-04-24 17:10:29.282887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.261 [2024-04-24 17:10:29.282912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:20.261 [2024-04-24 17:10:29.282913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.261 17:10:29 -- accel/accel.sh@20 -- # val= 00:06:20.261 17:10:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.261 17:10:29 -- accel/accel.sh@19 -- # IFS=: 00:06:20.261 17:10:29 -- accel/accel.sh@19 -- # read -r var val 00:06:20.261 17:10:29 -- accel/accel.sh@20 -- # val= 00:06:20.261 17:10:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.261 17:10:29 -- accel/accel.sh@19 -- # IFS=: 00:06:20.261 17:10:29 -- accel/accel.sh@19 -- # read -r var val 00:06:20.261 17:10:29 -- accel/accel.sh@20 -- # val= 00:06:20.261 17:10:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.261 17:10:29 -- accel/accel.sh@19 -- # IFS=: 00:06:20.261 17:10:29 -- accel/accel.sh@19 -- # read -r var val 00:06:20.261 17:10:29 -- accel/accel.sh@20 -- # val=0xf 00:06:20.261 17:10:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.261 17:10:29 -- accel/accel.sh@19 -- # IFS=: 00:06:20.261 17:10:29 -- accel/accel.sh@19 -- # read -r var val 00:06:20.261 17:10:29 -- accel/accel.sh@20 -- # val= 00:06:20.261 17:10:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.261 17:10:29 -- accel/accel.sh@19 -- # IFS=: 00:06:20.261 17:10:29 -- accel/accel.sh@19 -- # read -r var val 00:06:20.261 17:10:29 -- accel/accel.sh@20 -- # val= 00:06:20.261 17:10:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.261 17:10:29 -- accel/accel.sh@19 -- # IFS=: 00:06:20.261 17:10:29 -- accel/accel.sh@19 -- # read -r var val 00:06:20.261 17:10:29 -- accel/accel.sh@20 -- # val=decompress 00:06:20.261 17:10:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.261 17:10:29 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:20.261 17:10:29 -- accel/accel.sh@19 -- # IFS=: 00:06:20.261 17:10:29 -- accel/accel.sh@19 -- # read -r var val 00:06:20.261 17:10:29 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.261 17:10:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.261 17:10:29 -- accel/accel.sh@19 -- # IFS=: 00:06:20.261 17:10:29 -- accel/accel.sh@19 -- # read -r var val 00:06:20.261 17:10:29 -- accel/accel.sh@20 -- # val= 00:06:20.261 17:10:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.261 17:10:29 -- accel/accel.sh@19 -- # IFS=: 00:06:20.261 17:10:29 -- accel/accel.sh@19 -- # read -r var val 00:06:20.261 17:10:29 -- accel/accel.sh@20 -- # val=software 00:06:20.261 17:10:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.261 17:10:29 -- accel/accel.sh@22 -- # accel_module=software 00:06:20.261 17:10:29 -- accel/accel.sh@19 -- # IFS=: 00:06:20.261 17:10:29 -- accel/accel.sh@19 -- # read -r var val 00:06:20.261 17:10:29 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:20.261 17:10:29 -- accel/accel.sh@21 -- # case "$var" 
in 00:06:20.261 17:10:29 -- accel/accel.sh@19 -- # IFS=: 00:06:20.262 17:10:29 -- accel/accel.sh@19 -- # read -r var val 00:06:20.262 17:10:29 -- accel/accel.sh@20 -- # val=32 00:06:20.262 17:10:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.262 17:10:29 -- accel/accel.sh@19 -- # IFS=: 00:06:20.262 17:10:29 -- accel/accel.sh@19 -- # read -r var val 00:06:20.262 17:10:29 -- accel/accel.sh@20 -- # val=32 00:06:20.262 17:10:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.262 17:10:29 -- accel/accel.sh@19 -- # IFS=: 00:06:20.262 17:10:29 -- accel/accel.sh@19 -- # read -r var val 00:06:20.262 17:10:29 -- accel/accel.sh@20 -- # val=1 00:06:20.262 17:10:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.262 17:10:29 -- accel/accel.sh@19 -- # IFS=: 00:06:20.262 17:10:29 -- accel/accel.sh@19 -- # read -r var val 00:06:20.262 17:10:29 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.262 17:10:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.262 17:10:29 -- accel/accel.sh@19 -- # IFS=: 00:06:20.262 17:10:29 -- accel/accel.sh@19 -- # read -r var val 00:06:20.262 17:10:29 -- accel/accel.sh@20 -- # val=Yes 00:06:20.262 17:10:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.262 17:10:29 -- accel/accel.sh@19 -- # IFS=: 00:06:20.262 17:10:29 -- accel/accel.sh@19 -- # read -r var val 00:06:20.262 17:10:29 -- accel/accel.sh@20 -- # val= 00:06:20.262 17:10:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.262 17:10:29 -- accel/accel.sh@19 -- # IFS=: 00:06:20.262 17:10:29 -- accel/accel.sh@19 -- # read -r var val 00:06:20.262 17:10:29 -- accel/accel.sh@20 -- # val= 00:06:20.262 17:10:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.262 17:10:29 -- accel/accel.sh@19 -- # IFS=: 00:06:20.262 17:10:29 -- accel/accel.sh@19 -- # read -r var val 00:06:21.641 17:10:30 -- accel/accel.sh@20 -- # val= 00:06:21.641 17:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # IFS=: 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # read -r var val 00:06:21.641 17:10:30 -- accel/accel.sh@20 -- # val= 00:06:21.641 17:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # IFS=: 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # read -r var val 00:06:21.641 17:10:30 -- accel/accel.sh@20 -- # val= 00:06:21.641 17:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # IFS=: 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # read -r var val 00:06:21.641 17:10:30 -- accel/accel.sh@20 -- # val= 00:06:21.641 17:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # IFS=: 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # read -r var val 00:06:21.641 17:10:30 -- accel/accel.sh@20 -- # val= 00:06:21.641 17:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # IFS=: 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # read -r var val 00:06:21.641 17:10:30 -- accel/accel.sh@20 -- # val= 00:06:21.641 17:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # IFS=: 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # read -r var val 00:06:21.641 17:10:30 -- accel/accel.sh@20 -- # val= 00:06:21.641 17:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # IFS=: 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # read -r var val 00:06:21.641 17:10:30 -- accel/accel.sh@20 -- # val= 00:06:21.641 17:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.641 17:10:30 
-- accel/accel.sh@19 -- # IFS=: 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # read -r var val 00:06:21.641 17:10:30 -- accel/accel.sh@20 -- # val= 00:06:21.641 17:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # IFS=: 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # read -r var val 00:06:21.641 17:10:30 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.641 17:10:30 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:21.641 17:10:30 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.641 00:06:21.641 real 0m1.380s 00:06:21.641 user 0m4.595s 00:06:21.641 sys 0m0.128s 00:06:21.641 17:10:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:21.641 17:10:30 -- common/autotest_common.sh@10 -- # set +x 00:06:21.641 ************************************ 00:06:21.641 END TEST accel_decomp_mcore 00:06:21.641 ************************************ 00:06:21.641 17:10:30 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:21.641 17:10:30 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:21.641 17:10:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.641 17:10:30 -- common/autotest_common.sh@10 -- # set +x 00:06:21.641 ************************************ 00:06:21.641 START TEST accel_decomp_full_mcore 00:06:21.641 ************************************ 00:06:21.641 17:10:30 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:21.641 17:10:30 -- accel/accel.sh@16 -- # local accel_opc 00:06:21.641 17:10:30 -- accel/accel.sh@17 -- # local accel_module 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # IFS=: 00:06:21.641 17:10:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # read -r var val 00:06:21.641 17:10:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:21.641 17:10:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.641 17:10:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.641 17:10:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.641 17:10:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.641 17:10:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.641 17:10:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.641 17:10:30 -- accel/accel.sh@40 -- # local IFS=, 00:06:21.641 17:10:30 -- accel/accel.sh@41 -- # jq -r . 00:06:21.641 [2024-04-24 17:10:30.668393] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
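accel_decomp_full_mcore, which begins here, combines the two previous variations (-o 0 and -m 0xf) on the same decompress workload:

  # full-buffer decompress across four cores
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress \
      -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf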
00:06:21.641 [2024-04-24 17:10:30.668435] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2946375 ] 00:06:21.641 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.641 [2024-04-24 17:10:30.722962] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:21.641 [2024-04-24 17:10:30.795226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.641 [2024-04-24 17:10:30.795326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.641 [2024-04-24 17:10:30.795390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.641 [2024-04-24 17:10:30.795391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.641 17:10:30 -- accel/accel.sh@20 -- # val= 00:06:21.641 17:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # IFS=: 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # read -r var val 00:06:21.641 17:10:30 -- accel/accel.sh@20 -- # val= 00:06:21.641 17:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # IFS=: 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # read -r var val 00:06:21.641 17:10:30 -- accel/accel.sh@20 -- # val= 00:06:21.641 17:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # IFS=: 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # read -r var val 00:06:21.641 17:10:30 -- accel/accel.sh@20 -- # val=0xf 00:06:21.641 17:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # IFS=: 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # read -r var val 00:06:21.641 17:10:30 -- accel/accel.sh@20 -- # val= 00:06:21.641 17:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # IFS=: 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # read -r var val 00:06:21.641 17:10:30 -- accel/accel.sh@20 -- # val= 00:06:21.641 17:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # IFS=: 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # read -r var val 00:06:21.641 17:10:30 -- accel/accel.sh@20 -- # val=decompress 00:06:21.641 17:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.641 17:10:30 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # IFS=: 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # read -r var val 00:06:21.641 17:10:30 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:21.641 17:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # IFS=: 00:06:21.641 17:10:30 -- accel/accel.sh@19 -- # read -r var val 00:06:21.641 17:10:30 -- accel/accel.sh@20 -- # val= 00:06:21.642 17:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.642 17:10:30 -- accel/accel.sh@19 -- # IFS=: 00:06:21.642 17:10:30 -- accel/accel.sh@19 -- # read -r var val 00:06:21.642 17:10:30 -- accel/accel.sh@20 -- # val=software 00:06:21.642 17:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.642 17:10:30 -- accel/accel.sh@22 -- # accel_module=software 00:06:21.642 17:10:30 -- accel/accel.sh@19 -- # IFS=: 00:06:21.642 17:10:30 -- accel/accel.sh@19 -- # read -r var val 00:06:21.642 17:10:30 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:21.642 17:10:30 -- accel/accel.sh@21 -- # case "$var" 
in 00:06:21.642 17:10:30 -- accel/accel.sh@19 -- # IFS=: 00:06:21.642 17:10:30 -- accel/accel.sh@19 -- # read -r var val 00:06:21.642 17:10:30 -- accel/accel.sh@20 -- # val=32 00:06:21.642 17:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.642 17:10:30 -- accel/accel.sh@19 -- # IFS=: 00:06:21.642 17:10:30 -- accel/accel.sh@19 -- # read -r var val 00:06:21.642 17:10:30 -- accel/accel.sh@20 -- # val=32 00:06:21.642 17:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.642 17:10:30 -- accel/accel.sh@19 -- # IFS=: 00:06:21.642 17:10:30 -- accel/accel.sh@19 -- # read -r var val 00:06:21.642 17:10:30 -- accel/accel.sh@20 -- # val=1 00:06:21.642 17:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.642 17:10:30 -- accel/accel.sh@19 -- # IFS=: 00:06:21.642 17:10:30 -- accel/accel.sh@19 -- # read -r var val 00:06:21.642 17:10:30 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.642 17:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.642 17:10:30 -- accel/accel.sh@19 -- # IFS=: 00:06:21.642 17:10:30 -- accel/accel.sh@19 -- # read -r var val 00:06:21.642 17:10:30 -- accel/accel.sh@20 -- # val=Yes 00:06:21.642 17:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.642 17:10:30 -- accel/accel.sh@19 -- # IFS=: 00:06:21.642 17:10:30 -- accel/accel.sh@19 -- # read -r var val 00:06:21.642 17:10:30 -- accel/accel.sh@20 -- # val= 00:06:21.642 17:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.642 17:10:30 -- accel/accel.sh@19 -- # IFS=: 00:06:21.642 17:10:30 -- accel/accel.sh@19 -- # read -r var val 00:06:21.642 17:10:30 -- accel/accel.sh@20 -- # val= 00:06:21.642 17:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.642 17:10:30 -- accel/accel.sh@19 -- # IFS=: 00:06:21.642 17:10:30 -- accel/accel.sh@19 -- # read -r var val 00:06:23.023 17:10:32 -- accel/accel.sh@20 -- # val= 00:06:23.023 17:10:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.023 17:10:32 -- accel/accel.sh@19 -- # IFS=: 00:06:23.023 17:10:32 -- accel/accel.sh@19 -- # read -r var val 00:06:23.023 17:10:32 -- accel/accel.sh@20 -- # val= 00:06:23.023 17:10:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.023 17:10:32 -- accel/accel.sh@19 -- # IFS=: 00:06:23.023 17:10:32 -- accel/accel.sh@19 -- # read -r var val 00:06:23.023 17:10:32 -- accel/accel.sh@20 -- # val= 00:06:23.023 17:10:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.023 17:10:32 -- accel/accel.sh@19 -- # IFS=: 00:06:23.023 17:10:32 -- accel/accel.sh@19 -- # read -r var val 00:06:23.023 17:10:32 -- accel/accel.sh@20 -- # val= 00:06:23.023 17:10:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.023 17:10:32 -- accel/accel.sh@19 -- # IFS=: 00:06:23.023 17:10:32 -- accel/accel.sh@19 -- # read -r var val 00:06:23.023 17:10:32 -- accel/accel.sh@20 -- # val= 00:06:23.023 17:10:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.023 17:10:32 -- accel/accel.sh@19 -- # IFS=: 00:06:23.023 17:10:32 -- accel/accel.sh@19 -- # read -r var val 00:06:23.023 17:10:32 -- accel/accel.sh@20 -- # val= 00:06:23.023 17:10:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.023 17:10:32 -- accel/accel.sh@19 -- # IFS=: 00:06:23.023 17:10:32 -- accel/accel.sh@19 -- # read -r var val 00:06:23.023 17:10:32 -- accel/accel.sh@20 -- # val= 00:06:23.023 17:10:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.023 17:10:32 -- accel/accel.sh@19 -- # IFS=: 00:06:23.023 17:10:32 -- accel/accel.sh@19 -- # read -r var val 00:06:23.023 17:10:32 -- accel/accel.sh@20 -- # val= 00:06:23.023 17:10:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.023 17:10:32 
-- accel/accel.sh@19 -- # IFS=: 00:06:23.023 17:10:32 -- accel/accel.sh@19 -- # read -r var val 00:06:23.023 17:10:32 -- accel/accel.sh@20 -- # val= 00:06:23.023 17:10:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.023 17:10:32 -- accel/accel.sh@19 -- # IFS=: 00:06:23.023 17:10:32 -- accel/accel.sh@19 -- # read -r var val 00:06:23.023 17:10:32 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.023 17:10:32 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:23.023 17:10:32 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.023 00:06:23.023 real 0m1.370s 00:06:23.023 user 0m4.619s 00:06:23.023 sys 0m0.112s 00:06:23.023 17:10:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:23.023 17:10:32 -- common/autotest_common.sh@10 -- # set +x 00:06:23.023 ************************************ 00:06:23.023 END TEST accel_decomp_full_mcore 00:06:23.023 ************************************ 00:06:23.023 17:10:32 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:23.023 17:10:32 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:23.023 17:10:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.023 17:10:32 -- common/autotest_common.sh@10 -- # set +x 00:06:23.023 ************************************ 00:06:23.023 START TEST accel_decomp_mthread 00:06:23.023 ************************************ 00:06:23.023 17:10:32 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:23.023 17:10:32 -- accel/accel.sh@16 -- # local accel_opc 00:06:23.023 17:10:32 -- accel/accel.sh@17 -- # local accel_module 00:06:23.023 17:10:32 -- accel/accel.sh@19 -- # IFS=: 00:06:23.023 17:10:32 -- accel/accel.sh@19 -- # read -r var val 00:06:23.023 17:10:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:23.023 17:10:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:23.023 17:10:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.023 17:10:32 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.023 17:10:32 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.023 17:10:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.023 17:10:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.023 17:10:32 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.023 17:10:32 -- accel/accel.sh@40 -- # local IFS=, 00:06:23.023 17:10:32 -- accel/accel.sh@41 -- # jq -r . 00:06:23.023 [2024-04-24 17:10:32.203588] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
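accel_decomp_mthread keeps the single-core decompress run but adds -T 2 (echoed as val=2 below), which appears to spread the work over two worker threads. Sketch under the same assumptions:

  # decompress with two threads (-T 2) on one core
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress \
      -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2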
00:06:23.023 [2024-04-24 17:10:32.203662] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2946632 ] 00:06:23.023 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.023 [2024-04-24 17:10:32.261220] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.283 [2024-04-24 17:10:32.337132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.284 17:10:32 -- accel/accel.sh@20 -- # val= 00:06:23.284 17:10:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # IFS=: 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # read -r var val 00:06:23.284 17:10:32 -- accel/accel.sh@20 -- # val= 00:06:23.284 17:10:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # IFS=: 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # read -r var val 00:06:23.284 17:10:32 -- accel/accel.sh@20 -- # val= 00:06:23.284 17:10:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # IFS=: 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # read -r var val 00:06:23.284 17:10:32 -- accel/accel.sh@20 -- # val=0x1 00:06:23.284 17:10:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # IFS=: 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # read -r var val 00:06:23.284 17:10:32 -- accel/accel.sh@20 -- # val= 00:06:23.284 17:10:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # IFS=: 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # read -r var val 00:06:23.284 17:10:32 -- accel/accel.sh@20 -- # val= 00:06:23.284 17:10:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # IFS=: 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # read -r var val 00:06:23.284 17:10:32 -- accel/accel.sh@20 -- # val=decompress 00:06:23.284 17:10:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.284 17:10:32 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # IFS=: 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # read -r var val 00:06:23.284 17:10:32 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.284 17:10:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # IFS=: 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # read -r var val 00:06:23.284 17:10:32 -- accel/accel.sh@20 -- # val= 00:06:23.284 17:10:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # IFS=: 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # read -r var val 00:06:23.284 17:10:32 -- accel/accel.sh@20 -- # val=software 00:06:23.284 17:10:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.284 17:10:32 -- accel/accel.sh@22 -- # accel_module=software 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # IFS=: 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # read -r var val 00:06:23.284 17:10:32 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:23.284 17:10:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # IFS=: 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # read -r var val 00:06:23.284 17:10:32 -- accel/accel.sh@20 -- # val=32 00:06:23.284 17:10:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # IFS=: 00:06:23.284 17:10:32 -- 
accel/accel.sh@19 -- # read -r var val 00:06:23.284 17:10:32 -- accel/accel.sh@20 -- # val=32 00:06:23.284 17:10:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # IFS=: 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # read -r var val 00:06:23.284 17:10:32 -- accel/accel.sh@20 -- # val=2 00:06:23.284 17:10:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # IFS=: 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # read -r var val 00:06:23.284 17:10:32 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.284 17:10:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # IFS=: 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # read -r var val 00:06:23.284 17:10:32 -- accel/accel.sh@20 -- # val=Yes 00:06:23.284 17:10:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # IFS=: 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # read -r var val 00:06:23.284 17:10:32 -- accel/accel.sh@20 -- # val= 00:06:23.284 17:10:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # IFS=: 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # read -r var val 00:06:23.284 17:10:32 -- accel/accel.sh@20 -- # val= 00:06:23.284 17:10:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # IFS=: 00:06:23.284 17:10:32 -- accel/accel.sh@19 -- # read -r var val 00:06:24.662 17:10:33 -- accel/accel.sh@20 -- # val= 00:06:24.662 17:10:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.662 17:10:33 -- accel/accel.sh@19 -- # IFS=: 00:06:24.662 17:10:33 -- accel/accel.sh@19 -- # read -r var val 00:06:24.662 17:10:33 -- accel/accel.sh@20 -- # val= 00:06:24.662 17:10:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.662 17:10:33 -- accel/accel.sh@19 -- # IFS=: 00:06:24.662 17:10:33 -- accel/accel.sh@19 -- # read -r var val 00:06:24.662 17:10:33 -- accel/accel.sh@20 -- # val= 00:06:24.662 17:10:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.662 17:10:33 -- accel/accel.sh@19 -- # IFS=: 00:06:24.662 17:10:33 -- accel/accel.sh@19 -- # read -r var val 00:06:24.662 17:10:33 -- accel/accel.sh@20 -- # val= 00:06:24.662 17:10:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.662 17:10:33 -- accel/accel.sh@19 -- # IFS=: 00:06:24.662 17:10:33 -- accel/accel.sh@19 -- # read -r var val 00:06:24.662 17:10:33 -- accel/accel.sh@20 -- # val= 00:06:24.662 17:10:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.662 17:10:33 -- accel/accel.sh@19 -- # IFS=: 00:06:24.662 17:10:33 -- accel/accel.sh@19 -- # read -r var val 00:06:24.662 17:10:33 -- accel/accel.sh@20 -- # val= 00:06:24.662 17:10:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.662 17:10:33 -- accel/accel.sh@19 -- # IFS=: 00:06:24.662 17:10:33 -- accel/accel.sh@19 -- # read -r var val 00:06:24.662 17:10:33 -- accel/accel.sh@20 -- # val= 00:06:24.662 17:10:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.662 17:10:33 -- accel/accel.sh@19 -- # IFS=: 00:06:24.662 17:10:33 -- accel/accel.sh@19 -- # read -r var val 00:06:24.662 17:10:33 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.662 17:10:33 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:24.662 17:10:33 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.662 00:06:24.662 real 0m1.371s 00:06:24.662 user 0m1.271s 00:06:24.662 sys 0m0.114s 00:06:24.662 17:10:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:24.662 17:10:33 -- common/autotest_common.sh@10 -- # set +x 
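Note on the accel_decomp_mthread run above: the harness wraps a single accel_perf invocation whose full command line appears earlier in the trace. A minimal standalone sketch of the same run, assuming the workspace and build paths shown in this log, and dropping the -c /dev/fd/62 accel config that build_accel_config generates (assumed optional, with accel_perf falling back to its defaults):

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    # Same flags as the traced run: 1-second duration, decompress workload,
    # the pre-generated bib input file, plus -y and -T 2 as passed by accel_test.
    $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -T 2

The [[ -n software ]] and [[ -n decompress ]] checks that follow are the test confirming which module and opcode were exercised before the timing summary is printed.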
00:06:24.662 ************************************ 00:06:24.662 END TEST accel_decomp_mthread 00:06:24.662 ************************************ 00:06:24.662 17:10:33 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:24.662 17:10:33 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:24.662 17:10:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.662 17:10:33 -- common/autotest_common.sh@10 -- # set +x 00:06:24.662 ************************************ 00:06:24.662 START TEST accel_deomp_full_mthread 00:06:24.662 ************************************ 00:06:24.662 17:10:33 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:24.662 17:10:33 -- accel/accel.sh@16 -- # local accel_opc 00:06:24.662 17:10:33 -- accel/accel.sh@17 -- # local accel_module 00:06:24.662 17:10:33 -- accel/accel.sh@19 -- # IFS=: 00:06:24.662 17:10:33 -- accel/accel.sh@19 -- # read -r var val 00:06:24.662 17:10:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:24.662 17:10:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:24.662 17:10:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.662 17:10:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.662 17:10:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.662 17:10:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.662 17:10:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.662 17:10:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.662 17:10:33 -- accel/accel.sh@40 -- # local IFS=, 00:06:24.662 17:10:33 -- accel/accel.sh@41 -- # jq -r . 00:06:24.662 [2024-04-24 17:10:33.729689] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:06:24.662 [2024-04-24 17:10:33.729748] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2946898 ] 00:06:24.662 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.662 [2024-04-24 17:10:33.788852] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.662 [2024-04-24 17:10:33.864110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.662 17:10:33 -- accel/accel.sh@20 -- # val= 00:06:24.662 17:10:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.662 17:10:33 -- accel/accel.sh@19 -- # IFS=: 00:06:24.662 17:10:33 -- accel/accel.sh@19 -- # read -r var val 00:06:24.662 17:10:33 -- accel/accel.sh@20 -- # val= 00:06:24.662 17:10:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.662 17:10:33 -- accel/accel.sh@19 -- # IFS=: 00:06:24.662 17:10:33 -- accel/accel.sh@19 -- # read -r var val 00:06:24.920 17:10:33 -- accel/accel.sh@20 -- # val= 00:06:24.920 17:10:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.920 17:10:33 -- accel/accel.sh@19 -- # IFS=: 00:06:24.920 17:10:33 -- accel/accel.sh@19 -- # read -r var val 00:06:24.920 17:10:33 -- accel/accel.sh@20 -- # val=0x1 00:06:24.920 17:10:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.920 17:10:33 -- accel/accel.sh@19 -- # IFS=: 00:06:24.920 17:10:33 -- accel/accel.sh@19 -- # read -r var val 00:06:24.921 17:10:33 -- accel/accel.sh@20 -- # val= 00:06:24.921 17:10:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.921 17:10:33 -- accel/accel.sh@19 -- # IFS=: 00:06:24.921 17:10:33 -- accel/accel.sh@19 -- # read -r var val 00:06:24.921 17:10:33 -- accel/accel.sh@20 -- # val= 00:06:24.921 17:10:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.921 17:10:33 -- accel/accel.sh@19 -- # IFS=: 00:06:24.921 17:10:33 -- accel/accel.sh@19 -- # read -r var val 00:06:24.921 17:10:33 -- accel/accel.sh@20 -- # val=decompress 00:06:24.921 17:10:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.921 17:10:33 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:24.921 17:10:33 -- accel/accel.sh@19 -- # IFS=: 00:06:24.921 17:10:33 -- accel/accel.sh@19 -- # read -r var val 00:06:24.921 17:10:33 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:24.921 17:10:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.921 17:10:33 -- accel/accel.sh@19 -- # IFS=: 00:06:24.921 17:10:33 -- accel/accel.sh@19 -- # read -r var val 00:06:24.921 17:10:33 -- accel/accel.sh@20 -- # val= 00:06:24.921 17:10:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.921 17:10:33 -- accel/accel.sh@19 -- # IFS=: 00:06:24.921 17:10:33 -- accel/accel.sh@19 -- # read -r var val 00:06:24.921 17:10:33 -- accel/accel.sh@20 -- # val=software 00:06:24.921 17:10:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.921 17:10:33 -- accel/accel.sh@22 -- # accel_module=software 00:06:24.921 17:10:33 -- accel/accel.sh@19 -- # IFS=: 00:06:24.921 17:10:33 -- accel/accel.sh@19 -- # read -r var val 00:06:24.921 17:10:33 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:24.921 17:10:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.921 17:10:33 -- accel/accel.sh@19 -- # IFS=: 00:06:24.921 17:10:33 -- accel/accel.sh@19 -- # read -r var val 00:06:24.921 17:10:33 -- accel/accel.sh@20 -- # val=32 00:06:24.921 17:10:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.921 17:10:33 -- accel/accel.sh@19 -- # IFS=: 00:06:24.921 17:10:33 -- 
accel/accel.sh@19 -- # read -r var val 00:06:24.921 17:10:33 -- accel/accel.sh@20 -- # val=32 00:06:24.921 17:10:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.921 17:10:33 -- accel/accel.sh@19 -- # IFS=: 00:06:24.921 17:10:33 -- accel/accel.sh@19 -- # read -r var val 00:06:24.921 17:10:33 -- accel/accel.sh@20 -- # val=2 00:06:24.921 17:10:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.921 17:10:33 -- accel/accel.sh@19 -- # IFS=: 00:06:24.921 17:10:33 -- accel/accel.sh@19 -- # read -r var val 00:06:24.921 17:10:33 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.921 17:10:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.921 17:10:33 -- accel/accel.sh@19 -- # IFS=: 00:06:24.921 17:10:33 -- accel/accel.sh@19 -- # read -r var val 00:06:24.921 17:10:33 -- accel/accel.sh@20 -- # val=Yes 00:06:24.921 17:10:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.921 17:10:33 -- accel/accel.sh@19 -- # IFS=: 00:06:24.921 17:10:33 -- accel/accel.sh@19 -- # read -r var val 00:06:24.921 17:10:33 -- accel/accel.sh@20 -- # val= 00:06:24.921 17:10:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.921 17:10:33 -- accel/accel.sh@19 -- # IFS=: 00:06:24.921 17:10:33 -- accel/accel.sh@19 -- # read -r var val 00:06:24.921 17:10:33 -- accel/accel.sh@20 -- # val= 00:06:24.921 17:10:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.921 17:10:33 -- accel/accel.sh@19 -- # IFS=: 00:06:24.921 17:10:33 -- accel/accel.sh@19 -- # read -r var val 00:06:25.856 17:10:35 -- accel/accel.sh@20 -- # val= 00:06:25.856 17:10:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.856 17:10:35 -- accel/accel.sh@19 -- # IFS=: 00:06:25.856 17:10:35 -- accel/accel.sh@19 -- # read -r var val 00:06:25.856 17:10:35 -- accel/accel.sh@20 -- # val= 00:06:25.856 17:10:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.856 17:10:35 -- accel/accel.sh@19 -- # IFS=: 00:06:25.856 17:10:35 -- accel/accel.sh@19 -- # read -r var val 00:06:25.856 17:10:35 -- accel/accel.sh@20 -- # val= 00:06:25.856 17:10:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.856 17:10:35 -- accel/accel.sh@19 -- # IFS=: 00:06:25.856 17:10:35 -- accel/accel.sh@19 -- # read -r var val 00:06:25.856 17:10:35 -- accel/accel.sh@20 -- # val= 00:06:25.856 17:10:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.856 17:10:35 -- accel/accel.sh@19 -- # IFS=: 00:06:25.856 17:10:35 -- accel/accel.sh@19 -- # read -r var val 00:06:25.856 17:10:35 -- accel/accel.sh@20 -- # val= 00:06:25.856 17:10:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.856 17:10:35 -- accel/accel.sh@19 -- # IFS=: 00:06:25.856 17:10:35 -- accel/accel.sh@19 -- # read -r var val 00:06:25.856 17:10:35 -- accel/accel.sh@20 -- # val= 00:06:25.856 17:10:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.856 17:10:35 -- accel/accel.sh@19 -- # IFS=: 00:06:25.856 17:10:35 -- accel/accel.sh@19 -- # read -r var val 00:06:25.856 17:10:35 -- accel/accel.sh@20 -- # val= 00:06:25.856 17:10:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.856 17:10:35 -- accel/accel.sh@19 -- # IFS=: 00:06:25.856 17:10:35 -- accel/accel.sh@19 -- # read -r var val 00:06:25.856 17:10:35 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.856 17:10:35 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:25.856 17:10:35 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.856 00:06:25.856 real 0m1.390s 00:06:25.856 user 0m1.275s 00:06:25.856 sys 0m0.127s 00:06:25.856 17:10:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:25.856 17:10:35 -- common/autotest_common.sh@10 -- # set +x 
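The accel_deomp_full_mthread case above is the same invocation with -o 0 added (the flag the test's "full" variants pass) and a 111250-byte block visible in its config dump; under the same assumptions as the sketch above:

    $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0 -T 2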
00:06:25.856 ************************************ 00:06:25.856 END TEST accel_deomp_full_mthread 00:06:25.856 ************************************ 00:06:26.115 17:10:35 -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:26.115 17:10:35 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:26.115 17:10:35 -- accel/accel.sh@137 -- # build_accel_config 00:06:26.115 17:10:35 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:26.115 17:10:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.115 17:10:35 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.115 17:10:35 -- common/autotest_common.sh@10 -- # set +x 00:06:26.115 17:10:35 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.115 17:10:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.115 17:10:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.115 17:10:35 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.115 17:10:35 -- accel/accel.sh@40 -- # local IFS=, 00:06:26.115 17:10:35 -- accel/accel.sh@41 -- # jq -r . 00:06:26.115 ************************************ 00:06:26.115 START TEST accel_dif_functional_tests 00:06:26.115 ************************************ 00:06:26.115 17:10:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:26.115 [2024-04-24 17:10:35.295019] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:06:26.115 [2024-04-24 17:10:35.295055] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2947239 ] 00:06:26.115 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.115 [2024-04-24 17:10:35.347621] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:26.373 [2024-04-24 17:10:35.420805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.373 [2024-04-24 17:10:35.420907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.373 [2024-04-24 17:10:35.420908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.373 00:06:26.373 00:06:26.373 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.373 http://cunit.sourceforge.net/ 00:06:26.373 00:06:26.373 00:06:26.373 Suite: accel_dif 00:06:26.373 Test: verify: DIF generated, GUARD check ...passed 00:06:26.373 Test: verify: DIF generated, APPTAG check ...passed 00:06:26.373 Test: verify: DIF generated, REFTAG check ...passed 00:06:26.373 Test: verify: DIF not generated, GUARD check ...[2024-04-24 17:10:35.489194] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:26.373 [2024-04-24 17:10:35.489235] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:26.373 passed 00:06:26.373 Test: verify: DIF not generated, APPTAG check ...[2024-04-24 17:10:35.489279] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:26.373 [2024-04-24 17:10:35.489293] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:26.373 passed 00:06:26.373 Test: verify: DIF not generated, REFTAG check ...[2024-04-24 17:10:35.489311] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:26.373 [2024-04-24 17:10:35.489326] 
dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:26.373 passed 00:06:26.373 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:26.373 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-24 17:10:35.489364] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:26.373 passed 00:06:26.373 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:26.373 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:26.373 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:26.374 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-24 17:10:35.489459] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:26.374 passed 00:06:26.374 Test: generate copy: DIF generated, GUARD check ...passed 00:06:26.374 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:26.374 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:26.374 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:26.374 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:26.374 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:26.374 Test: generate copy: iovecs-len validate ...[2024-04-24 17:10:35.489614] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:26.374 passed 00:06:26.374 Test: generate copy: buffer alignment validate ...passed 00:06:26.374 00:06:26.374 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.374 suites 1 1 n/a 0 0 00:06:26.374 tests 20 20 20 0 0 00:06:26.374 asserts 204 204 204 0 n/a 00:06:26.374 00:06:26.374 Elapsed time = 0.000 seconds 00:06:26.632 00:06:26.632 real 0m0.426s 00:06:26.632 user 0m0.603s 00:06:26.632 sys 0m0.134s 00:06:26.632 17:10:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:26.632 17:10:35 -- common/autotest_common.sh@10 -- # set +x 00:06:26.632 ************************************ 00:06:26.632 END TEST accel_dif_functional_tests 00:06:26.632 ************************************ 00:06:26.632 00:06:26.632 real 0m34.083s 00:06:26.632 user 0m36.193s 00:06:26.632 sys 0m5.429s 00:06:26.632 17:10:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:26.632 17:10:35 -- common/autotest_common.sh@10 -- # set +x 00:06:26.632 ************************************ 00:06:26.632 END TEST accel 00:06:26.632 ************************************ 00:06:26.632 17:10:35 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:26.632 17:10:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:26.632 17:10:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.632 17:10:35 -- common/autotest_common.sh@10 -- # set +x 00:06:26.632 ************************************ 00:06:26.632 START TEST accel_rpc 00:06:26.632 ************************************ 00:06:26.632 17:10:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:26.891 * Looking for test storage... 
00:06:26.891 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:06:26.891 17:10:35 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:26.891 17:10:35 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:26.891 17:10:35 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2947441 00:06:26.891 17:10:35 -- accel/accel_rpc.sh@15 -- # waitforlisten 2947441 00:06:26.891 17:10:35 -- common/autotest_common.sh@817 -- # '[' -z 2947441 ']' 00:06:26.891 17:10:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.891 17:10:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:26.891 17:10:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.891 17:10:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:26.891 17:10:35 -- common/autotest_common.sh@10 -- # set +x 00:06:26.891 [2024-04-24 17:10:36.020741] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:06:26.891 [2024-04-24 17:10:36.020783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2947441 ] 00:06:26.891 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.891 [2024-04-24 17:10:36.074916] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.149 [2024-04-24 17:10:36.153469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.717 17:10:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:27.717 17:10:36 -- common/autotest_common.sh@850 -- # return 0 00:06:27.717 17:10:36 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:27.717 17:10:36 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:27.717 17:10:36 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:27.717 17:10:36 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:27.717 17:10:36 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:27.717 17:10:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:27.717 17:10:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.717 17:10:36 -- common/autotest_common.sh@10 -- # set +x 00:06:27.717 ************************************ 00:06:27.717 START TEST accel_assign_opcode 00:06:27.717 ************************************ 00:06:27.717 17:10:36 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:06:27.717 17:10:36 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:27.717 17:10:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:27.717 17:10:36 -- common/autotest_common.sh@10 -- # set +x 00:06:27.717 [2024-04-24 17:10:36.927720] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:27.717 17:10:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:27.717 17:10:36 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:27.717 17:10:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:27.717 17:10:36 -- common/autotest_common.sh@10 -- # set +x 00:06:27.717 [2024-04-24 17:10:36.935734] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation 
copy will be assigned to module software 00:06:27.717 17:10:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:27.717 17:10:36 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:27.717 17:10:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:27.717 17:10:36 -- common/autotest_common.sh@10 -- # set +x 00:06:27.975 17:10:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:27.975 17:10:37 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:27.975 17:10:37 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:27.975 17:10:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:27.975 17:10:37 -- accel/accel_rpc.sh@42 -- # grep software 00:06:27.975 17:10:37 -- common/autotest_common.sh@10 -- # set +x 00:06:27.975 17:10:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:27.975 software 00:06:27.975 00:06:27.975 real 0m0.231s 00:06:27.975 user 0m0.042s 00:06:27.975 sys 0m0.013s 00:06:27.975 17:10:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:27.975 17:10:37 -- common/autotest_common.sh@10 -- # set +x 00:06:27.975 ************************************ 00:06:27.975 END TEST accel_assign_opcode 00:06:27.975 ************************************ 00:06:27.975 17:10:37 -- accel/accel_rpc.sh@55 -- # killprocess 2947441 00:06:27.975 17:10:37 -- common/autotest_common.sh@936 -- # '[' -z 2947441 ']' 00:06:27.975 17:10:37 -- common/autotest_common.sh@940 -- # kill -0 2947441 00:06:27.975 17:10:37 -- common/autotest_common.sh@941 -- # uname 00:06:27.975 17:10:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:27.975 17:10:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2947441 00:06:28.233 17:10:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:28.233 17:10:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:28.233 17:10:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2947441' 00:06:28.233 killing process with pid 2947441 00:06:28.233 17:10:37 -- common/autotest_common.sh@955 -- # kill 2947441 00:06:28.233 17:10:37 -- common/autotest_common.sh@960 -- # wait 2947441 00:06:28.492 00:06:28.492 real 0m1.684s 00:06:28.492 user 0m1.782s 00:06:28.492 sys 0m0.455s 00:06:28.492 17:10:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:28.492 17:10:37 -- common/autotest_common.sh@10 -- # set +x 00:06:28.492 ************************************ 00:06:28.492 END TEST accel_rpc 00:06:28.492 ************************************ 00:06:28.492 17:10:37 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:06:28.492 17:10:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:28.492 17:10:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.492 17:10:37 -- common/autotest_common.sh@10 -- # set +x 00:06:28.493 ************************************ 00:06:28.493 START TEST app_cmdline 00:06:28.493 ************************************ 00:06:28.493 17:10:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:06:28.751 * Looking for test storage... 
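The accel_rpc test above talks to an spdk_tgt started with --wait-for-rpc and drives the opcode assignment entirely over JSON-RPC; every method name used here appears in the trace. A minimal sketch of the same sequence with rpc.py against the default /var/tmp/spdk.sock, omitting the waitforlisten/killprocess wrapping the harness does:

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $RPC accel_assign_opc -o copy -m software    # pin the copy opcode to the software module
    $RPC framework_start_init                    # leave the --wait-for-rpc pre-init state
    $RPC accel_get_opc_assignments | jq -r .copy | grep software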
00:06:28.751 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:06:28.751 17:10:37 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:28.751 17:10:37 -- app/cmdline.sh@17 -- # spdk_tgt_pid=2947763 00:06:28.751 17:10:37 -- app/cmdline.sh@18 -- # waitforlisten 2947763 00:06:28.751 17:10:37 -- common/autotest_common.sh@817 -- # '[' -z 2947763 ']' 00:06:28.751 17:10:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.751 17:10:37 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:28.751 17:10:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:28.751 17:10:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.751 17:10:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:28.751 17:10:37 -- common/autotest_common.sh@10 -- # set +x 00:06:28.751 [2024-04-24 17:10:37.860771] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:06:28.751 [2024-04-24 17:10:37.860815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2947763 ] 00:06:28.751 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.751 [2024-04-24 17:10:37.917206] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.751 [2024-04-24 17:10:37.995805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.688 17:10:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:29.688 17:10:38 -- common/autotest_common.sh@850 -- # return 0 00:06:29.688 17:10:38 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:29.688 { 00:06:29.688 "version": "SPDK v24.05-pre git sha1 0d1f30fbf", 00:06:29.688 "fields": { 00:06:29.688 "major": 24, 00:06:29.688 "minor": 5, 00:06:29.688 "patch": 0, 00:06:29.688 "suffix": "-pre", 00:06:29.688 "commit": "0d1f30fbf" 00:06:29.688 } 00:06:29.688 } 00:06:29.688 17:10:38 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:29.688 17:10:38 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:29.688 17:10:38 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:29.689 17:10:38 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:29.689 17:10:38 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:29.689 17:10:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:29.689 17:10:38 -- common/autotest_common.sh@10 -- # set +x 00:06:29.689 17:10:38 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:29.689 17:10:38 -- app/cmdline.sh@26 -- # sort 00:06:29.689 17:10:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:29.689 17:10:38 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:29.689 17:10:38 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:29.689 17:10:38 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:29.689 17:10:38 -- common/autotest_common.sh@638 -- # local es=0 00:06:29.689 17:10:38 -- 
common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:29.689 17:10:38 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:29.689 17:10:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:29.689 17:10:38 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:29.689 17:10:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:29.689 17:10:38 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:29.689 17:10:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:29.689 17:10:38 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:29.689 17:10:38 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:06:29.689 17:10:38 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:29.949 request: 00:06:29.949 { 00:06:29.949 "method": "env_dpdk_get_mem_stats", 00:06:29.949 "req_id": 1 00:06:29.949 } 00:06:29.949 Got JSON-RPC error response 00:06:29.949 response: 00:06:29.949 { 00:06:29.949 "code": -32601, 00:06:29.949 "message": "Method not found" 00:06:29.949 } 00:06:29.949 17:10:39 -- common/autotest_common.sh@641 -- # es=1 00:06:29.949 17:10:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:29.949 17:10:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:29.949 17:10:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:29.949 17:10:39 -- app/cmdline.sh@1 -- # killprocess 2947763 00:06:29.949 17:10:39 -- common/autotest_common.sh@936 -- # '[' -z 2947763 ']' 00:06:29.949 17:10:39 -- common/autotest_common.sh@940 -- # kill -0 2947763 00:06:29.949 17:10:39 -- common/autotest_common.sh@941 -- # uname 00:06:29.949 17:10:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:29.949 17:10:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2947763 00:06:29.949 17:10:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:29.949 17:10:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:29.949 17:10:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2947763' 00:06:29.949 killing process with pid 2947763 00:06:29.949 17:10:39 -- common/autotest_common.sh@955 -- # kill 2947763 00:06:29.949 17:10:39 -- common/autotest_common.sh@960 -- # wait 2947763 00:06:30.208 00:06:30.208 real 0m1.670s 00:06:30.208 user 0m1.976s 00:06:30.208 sys 0m0.409s 00:06:30.208 17:10:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:30.208 17:10:39 -- common/autotest_common.sh@10 -- # set +x 00:06:30.208 ************************************ 00:06:30.208 END TEST app_cmdline 00:06:30.208 ************************************ 00:06:30.208 17:10:39 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:06:30.208 17:10:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:30.208 17:10:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.208 17:10:39 -- common/autotest_common.sh@10 -- # set +x 00:06:30.467 ************************************ 00:06:30.467 START TEST version 00:06:30.467 ************************************ 00:06:30.467 
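The app_cmdline test that just finished starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served and anything else gets the -32601 "Method not found" response shown above. A minimal sketch of the same check, using the paths from this workspace and again omitting the harness's waitforlisten/killprocess handling:

    TGT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $TGT --rpcs-allowed spdk_get_version,rpc_get_methods &
    # wait for the RPC socket to appear before issuing calls (the test uses waitforlisten)
    $RPC spdk_get_version           # allowed: returns the version JSON seen in the trace
    $RPC env_dpdk_get_mem_stats     # rejected with code -32601, as exercised above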
17:10:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:06:30.467 * Looking for test storage... 00:06:30.467 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:06:30.467 17:10:39 -- app/version.sh@17 -- # get_header_version major 00:06:30.467 17:10:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:06:30.467 17:10:39 -- app/version.sh@14 -- # cut -f2 00:06:30.467 17:10:39 -- app/version.sh@14 -- # tr -d '"' 00:06:30.467 17:10:39 -- app/version.sh@17 -- # major=24 00:06:30.467 17:10:39 -- app/version.sh@18 -- # get_header_version minor 00:06:30.467 17:10:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:06:30.467 17:10:39 -- app/version.sh@14 -- # cut -f2 00:06:30.467 17:10:39 -- app/version.sh@14 -- # tr -d '"' 00:06:30.467 17:10:39 -- app/version.sh@18 -- # minor=5 00:06:30.467 17:10:39 -- app/version.sh@19 -- # get_header_version patch 00:06:30.467 17:10:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:06:30.467 17:10:39 -- app/version.sh@14 -- # cut -f2 00:06:30.467 17:10:39 -- app/version.sh@14 -- # tr -d '"' 00:06:30.467 17:10:39 -- app/version.sh@19 -- # patch=0 00:06:30.467 17:10:39 -- app/version.sh@20 -- # get_header_version suffix 00:06:30.467 17:10:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:06:30.467 17:10:39 -- app/version.sh@14 -- # cut -f2 00:06:30.467 17:10:39 -- app/version.sh@14 -- # tr -d '"' 00:06:30.467 17:10:39 -- app/version.sh@20 -- # suffix=-pre 00:06:30.467 17:10:39 -- app/version.sh@22 -- # version=24.5 00:06:30.467 17:10:39 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:30.467 17:10:39 -- app/version.sh@28 -- # version=24.5rc0 00:06:30.467 17:10:39 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:06:30.467 17:10:39 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:30.467 17:10:39 -- app/version.sh@30 -- # py_version=24.5rc0 00:06:30.467 17:10:39 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:30.727 00:06:30.727 real 0m0.154s 00:06:30.727 user 0m0.086s 00:06:30.727 sys 0m0.101s 00:06:30.727 17:10:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:30.727 17:10:39 -- common/autotest_common.sh@10 -- # set +x 00:06:30.727 ************************************ 00:06:30.727 END TEST version 00:06:30.727 ************************************ 00:06:30.727 17:10:39 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:06:30.727 17:10:39 -- spdk/autotest.sh@194 -- # uname -s 00:06:30.727 17:10:39 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:30.727 17:10:39 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:30.727 17:10:39 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:30.727 17:10:39 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:30.727 17:10:39 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:06:30.727 17:10:39 -- spdk/autotest.sh@258 -- # 
timing_exit lib 00:06:30.727 17:10:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:30.727 17:10:39 -- common/autotest_common.sh@10 -- # set +x 00:06:30.727 17:10:39 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:06:30.727 17:10:39 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:06:30.727 17:10:39 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:06:30.727 17:10:39 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:06:30.727 17:10:39 -- spdk/autotest.sh@281 -- # '[' rdma = rdma ']' 00:06:30.727 17:10:39 -- spdk/autotest.sh@282 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:06:30.727 17:10:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:30.727 17:10:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.727 17:10:39 -- common/autotest_common.sh@10 -- # set +x 00:06:30.727 ************************************ 00:06:30.727 START TEST nvmf_rdma 00:06:30.727 ************************************ 00:06:30.727 17:10:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:06:30.986 * Looking for test storage... 00:06:30.986 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:06:30.986 17:10:40 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:30.986 17:10:40 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:30.986 17:10:40 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:30.986 17:10:40 -- nvmf/common.sh@7 -- # uname -s 00:06:30.986 17:10:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:30.986 17:10:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:30.986 17:10:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:30.986 17:10:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:30.986 17:10:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:30.986 17:10:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:30.986 17:10:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:30.986 17:10:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:30.986 17:10:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:30.986 17:10:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:30.986 17:10:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:06:30.986 17:10:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:06:30.986 17:10:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:30.986 17:10:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:30.986 17:10:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:30.986 17:10:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:30.986 17:10:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:30.986 17:10:40 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.986 17:10:40 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.986 17:10:40 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.986 17:10:40 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.986 17:10:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.986 17:10:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.986 17:10:40 -- paths/export.sh@5 -- # export PATH 00:06:30.986 17:10:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.986 17:10:40 -- nvmf/common.sh@47 -- # : 0 00:06:30.986 17:10:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:30.986 17:10:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:30.986 17:10:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:30.986 17:10:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:30.986 17:10:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:30.986 17:10:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:30.986 17:10:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:30.986 17:10:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:30.986 17:10:40 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:30.986 17:10:40 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:30.986 17:10:40 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:30.986 17:10:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:30.986 17:10:40 -- common/autotest_common.sh@10 -- # set +x 00:06:30.986 17:10:40 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:30.986 17:10:40 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:06:30.986 17:10:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:30.986 17:10:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.986 17:10:40 -- common/autotest_common.sh@10 -- # set +x 00:06:30.986 ************************************ 00:06:30.986 START TEST nvmf_example 00:06:30.986 ************************************ 00:06:30.986 17:10:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:06:30.986 * Looking for test storage... 
00:06:31.245 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:31.245 17:10:40 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:31.245 17:10:40 -- nvmf/common.sh@7 -- # uname -s 00:06:31.245 17:10:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:31.245 17:10:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:31.245 17:10:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:31.245 17:10:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:31.245 17:10:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:31.245 17:10:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:31.245 17:10:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:31.245 17:10:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:31.245 17:10:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:31.245 17:10:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:31.245 17:10:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:06:31.245 17:10:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:06:31.245 17:10:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:31.245 17:10:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:31.245 17:10:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:31.245 17:10:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:31.245 17:10:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:31.245 17:10:40 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.245 17:10:40 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.245 17:10:40 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.246 17:10:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.246 17:10:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.246 17:10:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.246 17:10:40 -- paths/export.sh@5 -- # export PATH 00:06:31.246 17:10:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.246 17:10:40 -- nvmf/common.sh@47 -- # : 0 00:06:31.246 17:10:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:31.246 17:10:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:31.246 17:10:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:31.246 17:10:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:31.246 17:10:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:31.246 17:10:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:31.246 17:10:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:31.246 17:10:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:31.246 17:10:40 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:31.246 17:10:40 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:31.246 17:10:40 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:31.246 17:10:40 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:31.246 17:10:40 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:31.246 17:10:40 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:31.246 17:10:40 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:31.246 17:10:40 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:31.246 17:10:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:31.246 17:10:40 -- common/autotest_common.sh@10 -- # set +x 00:06:31.246 17:10:40 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:31.246 17:10:40 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:06:31.246 17:10:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:31.246 17:10:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:31.246 17:10:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:31.246 17:10:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:31.246 17:10:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:31.246 17:10:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:31.246 17:10:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:31.246 17:10:40 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:31.246 17:10:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:31.246 17:10:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:31.246 17:10:40 -- 
common/autotest_common.sh@10 -- # set +x 00:06:36.516 17:10:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:36.516 17:10:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:36.516 17:10:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:36.516 17:10:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:36.516 17:10:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:36.516 17:10:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:36.516 17:10:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:36.516 17:10:45 -- nvmf/common.sh@295 -- # net_devs=() 00:06:36.516 17:10:45 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:36.516 17:10:45 -- nvmf/common.sh@296 -- # e810=() 00:06:36.516 17:10:45 -- nvmf/common.sh@296 -- # local -ga e810 00:06:36.516 17:10:45 -- nvmf/common.sh@297 -- # x722=() 00:06:36.516 17:10:45 -- nvmf/common.sh@297 -- # local -ga x722 00:06:36.516 17:10:45 -- nvmf/common.sh@298 -- # mlx=() 00:06:36.516 17:10:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:36.516 17:10:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:36.516 17:10:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:36.516 17:10:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:36.516 17:10:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:36.516 17:10:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:36.516 17:10:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:36.516 17:10:45 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:36.516 17:10:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:36.516 17:10:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:36.516 17:10:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:36.516 17:10:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:36.516 17:10:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:36.516 17:10:45 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:06:36.516 17:10:45 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:06:36.516 17:10:45 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:06:36.516 17:10:45 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:06:36.516 17:10:45 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:06:36.516 17:10:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:36.516 17:10:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:36.516 17:10:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:06:36.516 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:06:36.516 17:10:45 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:36.516 17:10:45 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:36.516 17:10:45 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:36.516 17:10:45 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:36.516 17:10:45 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:36.516 17:10:45 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:36.516 17:10:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:36.516 17:10:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:06:36.516 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:06:36.516 17:10:45 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:36.516 17:10:45 -- nvmf/common.sh@346 -- # [[ 
mlx5_core == unbound ]] 00:06:36.516 17:10:45 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:36.516 17:10:45 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:36.516 17:10:45 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:36.516 17:10:45 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:36.516 17:10:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:36.516 17:10:45 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:06:36.516 17:10:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:36.516 17:10:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:36.516 17:10:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:36.516 17:10:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:36.516 17:10:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:06:36.516 Found net devices under 0000:da:00.0: mlx_0_0 00:06:36.516 17:10:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:36.516 17:10:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:36.516 17:10:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:36.516 17:10:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:36.516 17:10:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:36.516 17:10:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:06:36.516 Found net devices under 0000:da:00.1: mlx_0_1 00:06:36.516 17:10:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:36.516 17:10:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:36.516 17:10:45 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:36.516 17:10:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:36.516 17:10:45 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:06:36.516 17:10:45 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:06:36.516 17:10:45 -- nvmf/common.sh@409 -- # rdma_device_init 00:06:36.516 17:10:45 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:06:36.516 17:10:45 -- nvmf/common.sh@58 -- # uname 00:06:36.516 17:10:45 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:06:36.516 17:10:45 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:06:36.516 17:10:45 -- nvmf/common.sh@63 -- # modprobe ib_core 00:06:36.516 17:10:45 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:06:36.516 17:10:45 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:06:36.516 17:10:45 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:06:36.516 17:10:45 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:06:36.516 17:10:45 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:06:36.516 17:10:45 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:06:36.516 17:10:45 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:36.516 17:10:45 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:06:36.516 17:10:45 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:36.516 17:10:45 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:36.516 17:10:45 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:36.516 17:10:45 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:36.516 17:10:45 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:36.516 17:10:45 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:36.516 17:10:45 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:36.516 17:10:45 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:36.516 17:10:45 -- nvmf/common.sh@104 
-- # echo mlx_0_0 00:06:36.516 17:10:45 -- nvmf/common.sh@105 -- # continue 2 00:06:36.516 17:10:45 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:36.516 17:10:45 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:36.516 17:10:45 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:36.516 17:10:45 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:36.516 17:10:45 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:36.516 17:10:45 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:36.516 17:10:45 -- nvmf/common.sh@105 -- # continue 2 00:06:36.516 17:10:45 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:36.516 17:10:45 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:06:36.516 17:10:45 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:36.516 17:10:45 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:36.516 17:10:45 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:36.516 17:10:45 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:36.516 17:10:45 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:06:36.516 17:10:45 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:06:36.516 17:10:45 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:06:36.516 430: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:36.516 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:06:36.516 altname enp218s0f0np0 00:06:36.516 altname ens818f0np0 00:06:36.516 inet 192.168.100.8/24 scope global mlx_0_0 00:06:36.516 valid_lft forever preferred_lft forever 00:06:36.516 17:10:45 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:36.516 17:10:45 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:06:36.516 17:10:45 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:36.516 17:10:45 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:36.516 17:10:45 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:36.516 17:10:45 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:36.516 17:10:45 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:06:36.516 17:10:45 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:06:36.516 17:10:45 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:06:36.516 431: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:36.516 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:06:36.516 altname enp218s0f1np1 00:06:36.516 altname ens818f1np1 00:06:36.516 inet 192.168.100.9/24 scope global mlx_0_1 00:06:36.516 valid_lft forever preferred_lft forever 00:06:36.517 17:10:45 -- nvmf/common.sh@411 -- # return 0 00:06:36.517 17:10:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:36.517 17:10:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:36.517 17:10:45 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:06:36.517 17:10:45 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:06:36.517 17:10:45 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:06:36.517 17:10:45 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:36.517 17:10:45 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:36.517 17:10:45 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:36.517 17:10:45 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:36.517 17:10:45 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:36.517 17:10:45 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:36.517 17:10:45 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:36.517 17:10:45 -- 
nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:36.517 17:10:45 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:36.517 17:10:45 -- nvmf/common.sh@105 -- # continue 2 00:06:36.517 17:10:45 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:36.517 17:10:45 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:36.517 17:10:45 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:36.517 17:10:45 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:36.517 17:10:45 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:36.517 17:10:45 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:36.517 17:10:45 -- nvmf/common.sh@105 -- # continue 2 00:06:36.517 17:10:45 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:36.517 17:10:45 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:06:36.517 17:10:45 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:36.517 17:10:45 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:36.517 17:10:45 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:36.517 17:10:45 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:36.517 17:10:45 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:36.517 17:10:45 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:06:36.517 17:10:45 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:36.517 17:10:45 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:36.517 17:10:45 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:36.517 17:10:45 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:36.517 17:10:45 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:06:36.517 192.168.100.9' 00:06:36.517 17:10:45 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:06:36.517 192.168.100.9' 00:06:36.517 17:10:45 -- nvmf/common.sh@446 -- # head -n 1 00:06:36.517 17:10:45 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:36.517 17:10:45 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:06:36.517 192.168.100.9' 00:06:36.517 17:10:45 -- nvmf/common.sh@447 -- # head -n 1 00:06:36.517 17:10:45 -- nvmf/common.sh@447 -- # tail -n +2 00:06:36.517 17:10:45 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:36.517 17:10:45 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:06:36.517 17:10:45 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:36.517 17:10:45 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:06:36.517 17:10:45 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:06:36.517 17:10:45 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:06:36.517 17:10:45 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:36.517 17:10:45 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:36.517 17:10:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:36.517 17:10:45 -- common/autotest_common.sh@10 -- # set +x 00:06:36.517 17:10:45 -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:06:36.517 17:10:45 -- target/nvmf_example.sh@34 -- # nvmfpid=2951413 00:06:36.517 17:10:45 -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:36.517 17:10:45 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:36.517 17:10:45 -- target/nvmf_example.sh@36 -- # waitforlisten 2951413 00:06:36.517 17:10:45 -- common/autotest_common.sh@817 -- # '[' -z 2951413 ']' 00:06:36.517 17:10:45 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.517 17:10:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:36.517 17:10:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.517 17:10:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:36.517 17:10:45 -- common/autotest_common.sh@10 -- # set +x 00:06:36.517 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.452 17:10:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:37.452 17:10:46 -- common/autotest_common.sh@850 -- # return 0 00:06:37.452 17:10:46 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:37.452 17:10:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:37.452 17:10:46 -- common/autotest_common.sh@10 -- # set +x 00:06:37.452 17:10:46 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:37.452 17:10:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:37.452 17:10:46 -- common/autotest_common.sh@10 -- # set +x 00:06:37.740 17:10:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:37.740 17:10:46 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:37.740 17:10:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:37.740 17:10:46 -- common/autotest_common.sh@10 -- # set +x 00:06:37.740 17:10:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:37.740 17:10:46 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:37.740 17:10:46 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:37.740 17:10:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:37.740 17:10:46 -- common/autotest_common.sh@10 -- # set +x 00:06:37.740 17:10:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:37.740 17:10:46 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:37.740 17:10:46 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:37.740 17:10:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:37.740 17:10:46 -- common/autotest_common.sh@10 -- # set +x 00:06:37.740 17:10:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:37.740 17:10:46 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:37.740 17:10:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:37.740 17:10:46 -- common/autotest_common.sh@10 -- # set +x 00:06:37.740 17:10:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:37.740 17:10:46 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:37.740 17:10:46 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:37.740 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.947 Initializing NVMe Controllers 00:06:49.947 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:06:49.947 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 
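The trace above captures the whole bring-up that nvmf_example.sh drives over the RPC socket before the perf output continues below: create an RDMA transport, back it with a malloc bdev (64 MB, 512-byte blocks, returned as Malloc0 in this run), expose that bdev as a namespace of nqn.2016-06.io.spdk:cnode1, add an RDMA listener on 192.168.100.8:4420, and then point spdk_nvme_perf at that listener. As a rough sketch only (not the harness code itself), the same sequence can be replayed by hand with SPDK's scripts/rpc.py against an already-running nvmf target; paths are relative to the SPDK checkout and all values are the ones shown in the trace:

    # Sketch, assuming an SPDK nvmf target is already listening on /var/tmp/spdk.sock
    # and that mlx_0_0 carries 192.168.100.8 as detected earlier in the log.
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512        # prints the new bdev name (Malloc0 in this run)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # Drive I/O against the listener the same way the test does:
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'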
00:06:49.947 Initialization complete. Launching workers. 00:06:49.947 ======================================================== 00:06:49.947 Latency(us) 00:06:49.947 Device Information : IOPS MiB/s Average min max 00:06:49.947 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 25124.81 98.14 2547.22 633.10 11995.23 00:06:49.947 ======================================================== 00:06:49.947 Total : 25124.81 98.14 2547.22 633.10 11995.23 00:06:49.947 00:06:49.947 17:10:58 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:49.947 17:10:58 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:49.947 17:10:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:49.947 17:10:58 -- nvmf/common.sh@117 -- # sync 00:06:49.947 17:10:58 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:06:49.947 17:10:58 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:06:49.947 17:10:58 -- nvmf/common.sh@120 -- # set +e 00:06:49.947 17:10:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:49.947 17:10:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:06:49.947 rmmod nvme_rdma 00:06:49.947 rmmod nvme_fabrics 00:06:49.947 17:10:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:49.947 17:10:58 -- nvmf/common.sh@124 -- # set -e 00:06:49.947 17:10:58 -- nvmf/common.sh@125 -- # return 0 00:06:49.947 17:10:58 -- nvmf/common.sh@478 -- # '[' -n 2951413 ']' 00:06:49.947 17:10:58 -- nvmf/common.sh@479 -- # killprocess 2951413 00:06:49.947 17:10:58 -- common/autotest_common.sh@936 -- # '[' -z 2951413 ']' 00:06:49.947 17:10:58 -- common/autotest_common.sh@940 -- # kill -0 2951413 00:06:49.947 17:10:58 -- common/autotest_common.sh@941 -- # uname 00:06:49.947 17:10:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:49.947 17:10:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2951413 00:06:49.947 17:10:58 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:06:49.947 17:10:58 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:06:49.947 17:10:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2951413' 00:06:49.947 killing process with pid 2951413 00:06:49.947 17:10:58 -- common/autotest_common.sh@955 -- # kill 2951413 00:06:49.947 17:10:58 -- common/autotest_common.sh@960 -- # wait 2951413 00:06:49.947 nvmf threads initialize successfully 00:06:49.947 bdev subsystem init successfully 00:06:49.947 created a nvmf target service 00:06:49.947 create targets's poll groups done 00:06:49.947 all subsystems of target started 00:06:49.947 nvmf target is running 00:06:49.947 all subsystems of target stopped 00:06:49.947 destroy targets's poll groups done 00:06:49.947 destroyed the nvmf target service 00:06:49.947 bdev subsystem finish successfully 00:06:49.947 nvmf threads destroy successfully 00:06:49.947 17:10:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:49.947 17:10:58 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:06:49.947 17:10:58 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:49.947 17:10:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:49.947 17:10:58 -- common/autotest_common.sh@10 -- # set +x 00:06:49.947 00:06:49.947 real 0m18.265s 00:06:49.947 user 0m51.768s 00:06:49.947 sys 0m4.508s 00:06:49.947 17:10:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:49.948 17:10:58 -- common/autotest_common.sh@10 -- # set +x 00:06:49.948 ************************************ 00:06:49.948 END TEST nvmf_example 00:06:49.948 
************************************ 00:06:49.948 17:10:58 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:06:49.948 17:10:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:49.948 17:10:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.948 17:10:58 -- common/autotest_common.sh@10 -- # set +x 00:06:49.948 ************************************ 00:06:49.948 START TEST nvmf_filesystem 00:06:49.948 ************************************ 00:06:49.948 17:10:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:06:49.948 * Looking for test storage... 00:06:49.948 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:49.948 17:10:58 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:06:49.948 17:10:58 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:49.948 17:10:58 -- common/autotest_common.sh@34 -- # set -e 00:06:49.948 17:10:58 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:49.948 17:10:58 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:49.948 17:10:58 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:06:49.948 17:10:58 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:49.948 17:10:58 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:06:49.948 17:10:58 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:49.948 17:10:58 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:49.948 17:10:58 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:49.948 17:10:58 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:49.948 17:10:58 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:49.948 17:10:58 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:49.948 17:10:58 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:49.948 17:10:58 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:49.948 17:10:58 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:49.948 17:10:58 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:49.948 17:10:58 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:49.948 17:10:58 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:49.948 17:10:58 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:49.948 17:10:58 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:49.948 17:10:58 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:49.948 17:10:58 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:49.948 17:10:58 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:49.948 17:10:58 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:49.948 17:10:58 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:06:49.948 17:10:58 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:49.948 17:10:58 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:49.948 17:10:58 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:49.948 17:10:58 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:49.948 17:10:58 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 
00:06:49.948 17:10:58 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:49.948 17:10:58 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:49.948 17:10:58 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:49.948 17:10:58 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:49.948 17:10:58 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:49.948 17:10:58 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:49.948 17:10:58 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:49.948 17:10:58 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:49.948 17:10:58 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:49.948 17:10:58 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:49.948 17:10:58 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:49.948 17:10:58 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:06:49.948 17:10:58 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:49.948 17:10:58 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:49.948 17:10:58 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:49.948 17:10:58 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:49.948 17:10:58 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:49.948 17:10:58 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:49.948 17:10:58 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:49.948 17:10:58 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:49.948 17:10:58 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:49.948 17:10:58 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:06:49.948 17:10:58 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:06:49.948 17:10:58 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:49.948 17:10:58 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:06:49.948 17:10:58 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:06:49.948 17:10:58 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:06:49.948 17:10:58 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:06:49.948 17:10:58 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:06:49.948 17:10:58 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:06:49.948 17:10:58 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:06:49.948 17:10:58 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:06:49.948 17:10:58 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:06:49.948 17:10:58 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:06:49.948 17:10:58 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:06:49.948 17:10:58 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:06:49.948 17:10:58 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:06:49.948 17:10:58 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:06:49.948 17:10:58 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:06:49.948 17:10:58 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:06:49.948 17:10:58 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:06:49.948 17:10:58 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:06:49.948 17:10:58 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:06:49.948 17:10:58 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:49.948 17:10:58 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:06:49.948 17:10:58 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:06:49.948 17:10:58 -- common/build_config.sh@71 -- # 
CONFIG_FIO_PLUGIN=y 00:06:49.948 17:10:58 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:06:49.948 17:10:58 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:06:49.948 17:10:58 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:06:49.948 17:10:58 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:06:49.948 17:10:58 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:06:49.948 17:10:58 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:06:49.948 17:10:58 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:06:49.948 17:10:58 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:06:49.948 17:10:58 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:49.948 17:10:58 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:06:49.948 17:10:58 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:06:49.948 17:10:58 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:06:49.948 17:10:58 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:06:49.948 17:10:58 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:06:49.948 17:10:58 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:06:49.948 17:10:58 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:49.948 17:10:58 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:06:49.948 17:10:58 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:06:49.948 17:10:58 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:06:49.948 17:10:58 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:49.948 17:10:58 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:49.948 17:10:58 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:49.949 17:10:58 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:49.949 17:10:58 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:49.949 17:10:58 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:49.949 17:10:58 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:06:49.949 17:10:58 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:49.949 #define SPDK_CONFIG_H 00:06:49.949 #define SPDK_CONFIG_APPS 1 00:06:49.949 #define SPDK_CONFIG_ARCH native 00:06:49.949 #undef SPDK_CONFIG_ASAN 00:06:49.949 #undef SPDK_CONFIG_AVAHI 00:06:49.949 #undef SPDK_CONFIG_CET 00:06:49.949 #define SPDK_CONFIG_COVERAGE 1 00:06:49.949 #define SPDK_CONFIG_CROSS_PREFIX 00:06:49.949 #undef SPDK_CONFIG_CRYPTO 00:06:49.949 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:49.949 #undef SPDK_CONFIG_CUSTOMOCF 00:06:49.949 #undef SPDK_CONFIG_DAOS 00:06:49.949 #define SPDK_CONFIG_DAOS_DIR 00:06:49.949 #define SPDK_CONFIG_DEBUG 1 00:06:49.949 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:49.949 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:06:49.949 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:49.949 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:49.949 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:49.949 #define SPDK_CONFIG_ENV 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:06:49.949 #define SPDK_CONFIG_EXAMPLES 1 00:06:49.949 #undef SPDK_CONFIG_FC 00:06:49.949 #define SPDK_CONFIG_FC_PATH 00:06:49.949 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:49.949 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:49.949 #undef SPDK_CONFIG_FUSE 00:06:49.949 #undef SPDK_CONFIG_FUZZER 00:06:49.949 #define SPDK_CONFIG_FUZZER_LIB 00:06:49.949 #undef SPDK_CONFIG_GOLANG 00:06:49.949 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:49.949 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:49.949 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:49.949 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:06:49.949 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:49.949 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:49.949 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:49.949 #define SPDK_CONFIG_IDXD 1 00:06:49.949 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:49.949 #undef SPDK_CONFIG_IPSEC_MB 00:06:49.949 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:49.949 #define SPDK_CONFIG_ISAL 1 00:06:49.949 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:49.949 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:49.949 #define SPDK_CONFIG_LIBDIR 00:06:49.949 #undef SPDK_CONFIG_LTO 00:06:49.949 #define SPDK_CONFIG_MAX_LCORES 00:06:49.949 #define SPDK_CONFIG_NVME_CUSE 1 00:06:49.949 #undef SPDK_CONFIG_OCF 00:06:49.949 #define SPDK_CONFIG_OCF_PATH 00:06:49.949 #define SPDK_CONFIG_OPENSSL_PATH 00:06:49.949 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:49.949 #define SPDK_CONFIG_PGO_DIR 00:06:49.949 #undef SPDK_CONFIG_PGO_USE 00:06:49.949 #define SPDK_CONFIG_PREFIX /usr/local 00:06:49.949 #undef SPDK_CONFIG_RAID5F 00:06:49.949 #undef SPDK_CONFIG_RBD 00:06:49.949 #define SPDK_CONFIG_RDMA 1 00:06:49.949 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:49.949 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:49.949 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:49.949 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:49.949 #define SPDK_CONFIG_SHARED 1 00:06:49.949 #undef SPDK_CONFIG_SMA 00:06:49.949 #define SPDK_CONFIG_TESTS 1 00:06:49.949 #undef SPDK_CONFIG_TSAN 00:06:49.949 #define SPDK_CONFIG_UBLK 1 00:06:49.949 #define SPDK_CONFIG_UBSAN 1 00:06:49.949 #undef SPDK_CONFIG_UNIT_TESTS 00:06:49.949 #undef SPDK_CONFIG_URING 00:06:49.949 #define SPDK_CONFIG_URING_PATH 00:06:49.949 #undef SPDK_CONFIG_URING_ZNS 00:06:49.949 #undef SPDK_CONFIG_USDT 00:06:49.949 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:49.949 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:49.949 #undef SPDK_CONFIG_VFIO_USER 00:06:49.949 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:49.949 #define SPDK_CONFIG_VHOST 1 00:06:49.949 #define SPDK_CONFIG_VIRTIO 1 00:06:49.949 #undef SPDK_CONFIG_VTUNE 00:06:49.949 #define SPDK_CONFIG_VTUNE_DIR 00:06:49.949 #define SPDK_CONFIG_WERROR 1 00:06:49.949 #define SPDK_CONFIG_WPDK_DIR 00:06:49.949 #undef SPDK_CONFIG_XNVME 00:06:49.949 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:49.949 17:10:58 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:49.949 17:10:58 -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:49.949 17:10:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.949 17:10:58 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.949 17:10:58 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.949 17:10:58 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.949 17:10:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.949 17:10:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.949 17:10:58 -- paths/export.sh@5 -- # export PATH 00:06:49.949 17:10:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.949 17:10:58 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:06:49.949 17:10:58 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:06:49.949 17:10:58 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:06:49.949 17:10:58 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:06:49.949 17:10:58 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:49.949 17:10:58 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:49.949 17:10:58 -- pm/common@67 -- # TEST_TAG=N/A 00:06:49.949 17:10:58 -- pm/common@68 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:06:49.949 17:10:58 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:06:49.949 17:10:58 -- pm/common@71 -- # uname -s 00:06:49.949 17:10:58 -- pm/common@71 -- # PM_OS=Linux 00:06:49.949 17:10:58 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:49.949 17:10:58 -- pm/common@74 -- # 
[[ Linux == FreeBSD ]] 00:06:49.949 17:10:58 -- pm/common@76 -- # [[ Linux == Linux ]] 00:06:49.949 17:10:58 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:06:49.949 17:10:58 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:06:49.949 17:10:58 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:49.949 17:10:58 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:49.949 17:10:58 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:06:49.949 17:10:58 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:06:49.949 17:10:58 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:06:49.949 17:10:58 -- common/autotest_common.sh@57 -- # : 0 00:06:49.949 17:10:58 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:06:49.949 17:10:58 -- common/autotest_common.sh@61 -- # : 0 00:06:49.949 17:10:58 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:49.949 17:10:58 -- common/autotest_common.sh@63 -- # : 0 00:06:49.949 17:10:58 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:06:49.949 17:10:58 -- common/autotest_common.sh@65 -- # : 1 00:06:49.949 17:10:58 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:49.949 17:10:58 -- common/autotest_common.sh@67 -- # : 0 00:06:49.949 17:10:58 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:06:49.949 17:10:58 -- common/autotest_common.sh@69 -- # : 00:06:49.949 17:10:58 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:06:49.949 17:10:58 -- common/autotest_common.sh@71 -- # : 0 00:06:49.949 17:10:58 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:06:49.949 17:10:58 -- common/autotest_common.sh@73 -- # : 0 00:06:49.949 17:10:58 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:06:49.949 17:10:58 -- common/autotest_common.sh@75 -- # : 0 00:06:49.949 17:10:58 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:06:49.949 17:10:58 -- common/autotest_common.sh@77 -- # : 0 00:06:49.949 17:10:58 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:49.949 17:10:58 -- common/autotest_common.sh@79 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:06:49.950 17:10:58 -- common/autotest_common.sh@81 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:06:49.950 17:10:58 -- common/autotest_common.sh@83 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:06:49.950 17:10:58 -- common/autotest_common.sh@85 -- # : 1 00:06:49.950 17:10:58 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:06:49.950 17:10:58 -- common/autotest_common.sh@87 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:06:49.950 17:10:58 -- common/autotest_common.sh@89 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:06:49.950 17:10:58 -- common/autotest_common.sh@91 -- # : 1 00:06:49.950 17:10:58 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:06:49.950 17:10:58 -- common/autotest_common.sh@93 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:06:49.950 17:10:58 -- common/autotest_common.sh@95 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:49.950 17:10:58 -- common/autotest_common.sh@97 -- # : 0 00:06:49.950 
17:10:58 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:06:49.950 17:10:58 -- common/autotest_common.sh@99 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:06:49.950 17:10:58 -- common/autotest_common.sh@101 -- # : rdma 00:06:49.950 17:10:58 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:49.950 17:10:58 -- common/autotest_common.sh@103 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:06:49.950 17:10:58 -- common/autotest_common.sh@105 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:06:49.950 17:10:58 -- common/autotest_common.sh@107 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:06:49.950 17:10:58 -- common/autotest_common.sh@109 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:06:49.950 17:10:58 -- common/autotest_common.sh@111 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:06:49.950 17:10:58 -- common/autotest_common.sh@113 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:06:49.950 17:10:58 -- common/autotest_common.sh@115 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:06:49.950 17:10:58 -- common/autotest_common.sh@117 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:49.950 17:10:58 -- common/autotest_common.sh@119 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:06:49.950 17:10:58 -- common/autotest_common.sh@121 -- # : 1 00:06:49.950 17:10:58 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:06:49.950 17:10:58 -- common/autotest_common.sh@123 -- # : 00:06:49.950 17:10:58 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:49.950 17:10:58 -- common/autotest_common.sh@125 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:06:49.950 17:10:58 -- common/autotest_common.sh@127 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:06:49.950 17:10:58 -- common/autotest_common.sh@129 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:06:49.950 17:10:58 -- common/autotest_common.sh@131 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:06:49.950 17:10:58 -- common/autotest_common.sh@133 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:06:49.950 17:10:58 -- common/autotest_common.sh@135 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:06:49.950 17:10:58 -- common/autotest_common.sh@137 -- # : 00:06:49.950 17:10:58 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:06:49.950 17:10:58 -- common/autotest_common.sh@139 -- # : true 00:06:49.950 17:10:58 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:06:49.950 17:10:58 -- common/autotest_common.sh@141 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:06:49.950 17:10:58 -- common/autotest_common.sh@143 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:06:49.950 17:10:58 -- common/autotest_common.sh@145 -- # : 0 
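The long run of paired ': <value>' / 'export SPDK_TEST_*' entries above (and continuing below) is the xtrace of autotest_common.sh applying defaults to its test switches; switches enabled for this job trace as ': 1' (for example SPDK_TEST_NVMF, SPDK_TEST_NVME_CLI, SPDK_RUN_UBSAN), the rest fall back to 0. Reading only the trace, each pair is consistent with the usual bash default-then-export idiom; a minimal sketch of that idiom (assumed from the trace, not copied from the script) is:

    # Assumed idiom behind each ': 0' / 'export NAME' pair in the trace:
    : "${SPDK_TEST_NVMF:=0}"     # xtrace prints ': 0', or ': 1' when the job already set it
    export SPDK_TEST_NVMF        # xtrace prints 'export SPDK_TEST_NVMF'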
00:06:49.950 17:10:58 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:06:49.950 17:10:58 -- common/autotest_common.sh@147 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:06:49.950 17:10:58 -- common/autotest_common.sh@149 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:06:49.950 17:10:58 -- common/autotest_common.sh@151 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:06:49.950 17:10:58 -- common/autotest_common.sh@153 -- # : mlx5 00:06:49.950 17:10:58 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:06:49.950 17:10:58 -- common/autotest_common.sh@155 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:06:49.950 17:10:58 -- common/autotest_common.sh@157 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:06:49.950 17:10:58 -- common/autotest_common.sh@159 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:06:49.950 17:10:58 -- common/autotest_common.sh@161 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:06:49.950 17:10:58 -- common/autotest_common.sh@163 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:06:49.950 17:10:58 -- common/autotest_common.sh@166 -- # : 00:06:49.950 17:10:58 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:06:49.950 17:10:58 -- common/autotest_common.sh@168 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:06:49.950 17:10:58 -- common/autotest_common.sh@170 -- # : 0 00:06:49.950 17:10:58 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:49.950 17:10:58 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:06:49.950 17:10:58 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:06:49.950 17:10:58 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:06:49.950 17:10:58 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:06:49.950 17:10:58 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:49.950 17:10:58 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:49.950 17:10:58 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:49.950 17:10:58 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:49.950 17:10:58 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:49.950 17:10:58 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:49.950 17:10:58 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:06:49.950 17:10:58 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:06:49.950 17:10:58 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:49.950 17:10:58 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:06:49.950 17:10:58 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:49.950 17:10:58 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:49.950 17:10:58 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:49.950 17:10:58 -- common/autotest_common.sh@193 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:49.950 17:10:58 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:49.950 17:10:58 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:06:49.950 17:10:58 -- common/autotest_common.sh@199 -- # cat 00:06:49.950 17:10:58 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:06:49.950 17:10:58 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:49.950 17:10:58 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:49.950 17:10:58 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:49.950 17:10:58 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:49.950 17:10:58 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:06:49.950 17:10:58 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:06:49.950 17:10:58 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:06:49.950 17:10:58 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:06:49.951 17:10:58 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:06:49.951 17:10:58 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:06:49.951 17:10:58 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:49.951 17:10:58 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:49.951 17:10:58 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:49.951 17:10:58 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:49.951 17:10:58 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:49.951 17:10:58 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:49.951 17:10:58 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:49.951 17:10:58 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:49.951 17:10:58 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:06:49.951 17:10:58 -- common/autotest_common.sh@252 -- # export valgrind= 00:06:49.951 17:10:58 -- common/autotest_common.sh@252 -- # valgrind= 00:06:49.951 17:10:58 -- common/autotest_common.sh@258 -- # uname -s 00:06:49.951 17:10:58 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:06:49.951 17:10:58 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:06:49.951 17:10:58 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:06:49.951 17:10:58 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:06:49.951 17:10:58 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:06:49.951 17:10:58 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:06:49.951 17:10:58 -- common/autotest_common.sh@268 -- # MAKE=make 00:06:49.951 17:10:58 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j96 00:06:49.951 17:10:58 -- 
common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:06:49.951 17:10:58 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:06:49.951 17:10:58 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:06:49.951 17:10:58 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:06:49.951 17:10:58 -- common/autotest_common.sh@289 -- # for i in "$@" 00:06:49.951 17:10:58 -- common/autotest_common.sh@290 -- # case "$i" in 00:06:49.951 17:10:58 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=rdma 00:06:49.951 17:10:58 -- common/autotest_common.sh@307 -- # [[ -z 2953585 ]] 00:06:49.951 17:10:58 -- common/autotest_common.sh@307 -- # kill -0 2953585 00:06:49.951 17:10:58 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:06:49.951 17:10:58 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:06:49.951 17:10:58 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:06:49.951 17:10:58 -- common/autotest_common.sh@320 -- # local mount target_dir 00:06:49.951 17:10:58 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:06:49.951 17:10:58 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:06:49.951 17:10:58 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:06:49.951 17:10:58 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:06:49.951 17:10:58 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.RfSGmt 00:06:49.951 17:10:58 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:49.951 17:10:58 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:06:49.951 17:10:58 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:06:49.951 17:10:58 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.RfSGmt/tests/target /tmp/spdk.RfSGmt 00:06:49.951 17:10:58 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:06:49.951 17:10:58 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:49.951 17:10:58 -- common/autotest_common.sh@316 -- # df -T 00:06:49.951 17:10:58 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:06:49.951 17:10:58 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:06:49.951 17:10:58 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:06:49.951 17:10:58 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:06:49.951 17:10:58 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:06:49.951 17:10:58 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:06:49.951 17:10:58 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:49.951 17:10:58 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:06:49.951 17:10:58 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:06:49.951 17:10:58 -- common/autotest_common.sh@351 -- # avails["$mount"]=1052192768 00:06:49.951 17:10:58 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:06:49.951 17:10:58 -- common/autotest_common.sh@352 -- # uses["$mount"]=4232237056 00:06:49.951 17:10:58 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:49.951 17:10:58 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_root 00:06:49.951 17:10:58 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:06:49.951 17:10:58 -- common/autotest_common.sh@351 -- # 
avails["$mount"]=182630903808 00:06:49.951 17:10:58 -- common/autotest_common.sh@351 -- # sizes["$mount"]=195974299648 00:06:49.951 17:10:58 -- common/autotest_common.sh@352 -- # uses["$mount"]=13343395840 00:06:49.951 17:10:58 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:49.951 17:10:58 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:49.951 17:10:58 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:49.951 17:10:58 -- common/autotest_common.sh@351 -- # avails["$mount"]=97984536576 00:06:49.951 17:10:58 -- common/autotest_common.sh@351 -- # sizes["$mount"]=97987149824 00:06:49.951 17:10:58 -- common/autotest_common.sh@352 -- # uses["$mount"]=2613248 00:06:49.951 17:10:58 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:49.951 17:10:58 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:49.951 17:10:58 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:49.951 17:10:58 -- common/autotest_common.sh@351 -- # avails["$mount"]=39185473536 00:06:49.951 17:10:58 -- common/autotest_common.sh@351 -- # sizes["$mount"]=39194861568 00:06:49.951 17:10:58 -- common/autotest_common.sh@352 -- # uses["$mount"]=9388032 00:06:49.951 17:10:58 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:49.951 17:10:58 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:49.951 17:10:58 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:49.951 17:10:58 -- common/autotest_common.sh@351 -- # avails["$mount"]=97985867776 00:06:49.951 17:10:58 -- common/autotest_common.sh@351 -- # sizes["$mount"]=97987149824 00:06:49.951 17:10:58 -- common/autotest_common.sh@352 -- # uses["$mount"]=1282048 00:06:49.951 17:10:58 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:49.951 17:10:58 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:49.951 17:10:58 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:49.951 17:10:58 -- common/autotest_common.sh@351 -- # avails["$mount"]=19597422592 00:06:49.951 17:10:58 -- common/autotest_common.sh@351 -- # sizes["$mount"]=19597426688 00:06:49.951 17:10:58 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:06:49.951 17:10:58 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:49.951 17:10:58 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:06:49.951 * Looking for test storage... 
00:06:49.951 17:10:58 -- common/autotest_common.sh@357 -- # local target_space new_size 00:06:49.951 17:10:58 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:06:49.951 17:10:58 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:49.951 17:10:58 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:49.951 17:10:58 -- common/autotest_common.sh@361 -- # mount=/ 00:06:49.951 17:10:58 -- common/autotest_common.sh@363 -- # target_space=182630903808 00:06:49.951 17:10:58 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:06:49.951 17:10:58 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:06:49.951 17:10:58 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:06:49.951 17:10:58 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:06:49.951 17:10:58 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:06:49.951 17:10:58 -- common/autotest_common.sh@370 -- # new_size=15557988352 00:06:49.951 17:10:58 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:49.951 17:10:58 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:49.951 17:10:58 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:49.951 17:10:58 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:49.951 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:49.951 17:10:58 -- common/autotest_common.sh@378 -- # return 0 00:06:49.951 17:10:58 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:06:49.951 17:10:58 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:06:49.951 17:10:58 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:49.951 17:10:58 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:49.951 17:10:58 -- common/autotest_common.sh@1673 -- # true 00:06:49.951 17:10:58 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:06:49.951 17:10:58 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:49.951 17:10:58 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:49.951 17:10:58 -- common/autotest_common.sh@27 -- # exec 00:06:49.951 17:10:58 -- common/autotest_common.sh@29 -- # exec 00:06:49.951 17:10:58 -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:49.951 17:10:58 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:49.951 17:10:58 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:49.951 17:10:58 -- common/autotest_common.sh@18 -- # set -x 00:06:49.951 17:10:58 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:49.951 17:10:58 -- nvmf/common.sh@7 -- # uname -s 00:06:49.951 17:10:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:49.951 17:10:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:49.951 17:10:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:49.951 17:10:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:49.951 17:10:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:49.951 17:10:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:49.951 17:10:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:49.951 17:10:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:49.951 17:10:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:49.952 17:10:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:49.952 17:10:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:06:49.952 17:10:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:06:49.952 17:10:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:49.952 17:10:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:49.952 17:10:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:49.952 17:10:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:49.952 17:10:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:49.952 17:10:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.952 17:10:58 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.952 17:10:58 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.952 17:10:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.952 17:10:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.952 17:10:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.952 17:10:58 -- paths/export.sh@5 -- # export PATH 00:06:49.952 17:10:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.952 17:10:58 -- nvmf/common.sh@47 -- # : 0 00:06:49.952 17:10:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:49.952 17:10:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:49.952 17:10:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:49.952 17:10:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:49.952 17:10:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:49.952 17:10:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:49.952 17:10:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:49.952 17:10:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:49.952 17:10:58 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:49.952 17:10:58 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:49.952 17:10:58 -- target/filesystem.sh@15 -- # nvmftestinit 00:06:49.952 17:10:58 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:06:49.952 17:10:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:49.952 17:10:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:49.952 17:10:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:49.952 17:10:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:49.952 17:10:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:49.952 17:10:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:49.952 17:10:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:49.952 17:10:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:49.952 17:10:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:49.952 17:10:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:49.952 17:10:58 -- common/autotest_common.sh@10 -- # set +x 00:06:55.321 17:11:03 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:55.321 17:11:03 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:55.321 17:11:03 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:55.321 17:11:03 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:55.321 17:11:03 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:55.321 17:11:03 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:55.321 17:11:03 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:55.321 17:11:03 -- 
nvmf/common.sh@295 -- # net_devs=() 00:06:55.321 17:11:03 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:55.322 17:11:03 -- nvmf/common.sh@296 -- # e810=() 00:06:55.322 17:11:03 -- nvmf/common.sh@296 -- # local -ga e810 00:06:55.322 17:11:03 -- nvmf/common.sh@297 -- # x722=() 00:06:55.322 17:11:03 -- nvmf/common.sh@297 -- # local -ga x722 00:06:55.322 17:11:03 -- nvmf/common.sh@298 -- # mlx=() 00:06:55.322 17:11:03 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:55.322 17:11:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:55.322 17:11:03 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:55.322 17:11:03 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:55.322 17:11:03 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:55.322 17:11:03 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:55.322 17:11:03 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:55.322 17:11:03 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:55.322 17:11:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:55.322 17:11:03 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:55.322 17:11:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:55.322 17:11:03 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:55.322 17:11:03 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:55.322 17:11:03 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:06:55.322 17:11:03 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:06:55.322 17:11:03 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:06:55.322 17:11:03 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:06:55.322 17:11:03 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:06:55.322 17:11:03 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:55.322 17:11:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:55.322 17:11:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:06:55.322 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:06:55.322 17:11:03 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:55.322 17:11:03 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:55.322 17:11:03 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:55.322 17:11:03 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:55.322 17:11:03 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:55.322 17:11:03 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:55.322 17:11:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:55.322 17:11:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:06:55.322 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:06:55.322 17:11:03 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:55.322 17:11:03 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:55.322 17:11:03 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:55.322 17:11:03 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:55.322 17:11:03 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:55.322 17:11:03 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:55.322 17:11:03 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:55.322 17:11:03 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:06:55.322 17:11:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:55.322 
17:11:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:55.322 17:11:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:55.322 17:11:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:55.322 17:11:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:06:55.322 Found net devices under 0000:da:00.0: mlx_0_0 00:06:55.322 17:11:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:55.322 17:11:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:55.322 17:11:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:55.322 17:11:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:55.322 17:11:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:55.322 17:11:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:06:55.322 Found net devices under 0000:da:00.1: mlx_0_1 00:06:55.322 17:11:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:55.322 17:11:03 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:55.322 17:11:03 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:55.322 17:11:03 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:55.322 17:11:03 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:06:55.322 17:11:03 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:06:55.322 17:11:03 -- nvmf/common.sh@409 -- # rdma_device_init 00:06:55.322 17:11:03 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:06:55.322 17:11:03 -- nvmf/common.sh@58 -- # uname 00:06:55.322 17:11:03 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:06:55.322 17:11:03 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:06:55.322 17:11:03 -- nvmf/common.sh@63 -- # modprobe ib_core 00:06:55.322 17:11:03 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:06:55.322 17:11:03 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:06:55.322 17:11:03 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:06:55.322 17:11:03 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:06:55.322 17:11:03 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:06:55.322 17:11:03 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:06:55.322 17:11:03 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:55.322 17:11:03 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:06:55.322 17:11:03 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:55.322 17:11:03 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:55.322 17:11:03 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:55.322 17:11:03 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:55.322 17:11:03 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:55.322 17:11:03 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:55.322 17:11:03 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:55.322 17:11:03 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:55.322 17:11:03 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:55.322 17:11:03 -- nvmf/common.sh@105 -- # continue 2 00:06:55.322 17:11:03 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:55.322 17:11:03 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:55.322 17:11:03 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:55.322 17:11:03 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:55.322 17:11:03 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:55.322 17:11:03 -- nvmf/common.sh@104 -- # 
echo mlx_0_1 00:06:55.322 17:11:03 -- nvmf/common.sh@105 -- # continue 2 00:06:55.322 17:11:03 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:55.322 17:11:03 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:06:55.322 17:11:03 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:55.322 17:11:03 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:55.322 17:11:03 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:55.322 17:11:03 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:55.322 17:11:03 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:06:55.322 17:11:03 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:06:55.322 17:11:03 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:06:55.322 430: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:55.322 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:06:55.322 altname enp218s0f0np0 00:06:55.322 altname ens818f0np0 00:06:55.322 inet 192.168.100.8/24 scope global mlx_0_0 00:06:55.322 valid_lft forever preferred_lft forever 00:06:55.322 17:11:03 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:55.322 17:11:03 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:06:55.322 17:11:03 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:55.322 17:11:03 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:55.322 17:11:03 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:55.322 17:11:03 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:55.322 17:11:03 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:06:55.322 17:11:03 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:06:55.322 17:11:03 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:06:55.322 431: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:55.322 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:06:55.322 altname enp218s0f1np1 00:06:55.322 altname ens818f1np1 00:06:55.322 inet 192.168.100.9/24 scope global mlx_0_1 00:06:55.322 valid_lft forever preferred_lft forever 00:06:55.322 17:11:03 -- nvmf/common.sh@411 -- # return 0 00:06:55.322 17:11:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:55.322 17:11:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:55.322 17:11:03 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:06:55.322 17:11:03 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:06:55.322 17:11:03 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:06:55.322 17:11:03 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:55.322 17:11:03 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:55.322 17:11:03 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:55.322 17:11:03 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:55.322 17:11:03 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:55.322 17:11:03 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:55.322 17:11:03 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:55.322 17:11:03 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:55.322 17:11:03 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:55.322 17:11:03 -- nvmf/common.sh@105 -- # continue 2 00:06:55.323 17:11:03 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:55.323 17:11:03 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:55.323 17:11:03 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:55.323 17:11:03 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:55.323 17:11:03 -- 
nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:55.323 17:11:03 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:55.323 17:11:03 -- nvmf/common.sh@105 -- # continue 2 00:06:55.323 17:11:03 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:55.323 17:11:03 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:06:55.323 17:11:03 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:55.323 17:11:03 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:55.323 17:11:03 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:55.323 17:11:03 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:55.323 17:11:03 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:55.323 17:11:03 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:06:55.323 17:11:03 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:55.323 17:11:03 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:55.323 17:11:03 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:55.323 17:11:03 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:55.323 17:11:03 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:06:55.323 192.168.100.9' 00:06:55.323 17:11:03 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:06:55.323 192.168.100.9' 00:06:55.323 17:11:03 -- nvmf/common.sh@446 -- # head -n 1 00:06:55.323 17:11:03 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:55.323 17:11:03 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:06:55.323 192.168.100.9' 00:06:55.323 17:11:03 -- nvmf/common.sh@447 -- # tail -n +2 00:06:55.323 17:11:03 -- nvmf/common.sh@447 -- # head -n 1 00:06:55.323 17:11:03 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:55.323 17:11:03 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:06:55.323 17:11:03 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:55.323 17:11:03 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:06:55.323 17:11:03 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:06:55.323 17:11:03 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:06:55.323 17:11:03 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:55.323 17:11:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:55.323 17:11:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.323 17:11:03 -- common/autotest_common.sh@10 -- # set +x 00:06:55.323 ************************************ 00:06:55.323 START TEST nvmf_filesystem_no_in_capsule 00:06:55.323 ************************************ 00:06:55.323 17:11:03 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:06:55.323 17:11:03 -- target/filesystem.sh@47 -- # in_capsule=0 00:06:55.323 17:11:03 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:55.323 17:11:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:55.323 17:11:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:55.323 17:11:03 -- common/autotest_common.sh@10 -- # set +x 00:06:55.323 17:11:03 -- nvmf/common.sh@470 -- # nvmfpid=2956635 00:06:55.323 17:11:03 -- nvmf/common.sh@471 -- # waitforlisten 2956635 00:06:55.323 17:11:03 -- common/autotest_common.sh@817 -- # '[' -z 2956635 ']' 00:06:55.323 17:11:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.323 17:11:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:55.323 17:11:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
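Before the first test starts, the helpers traced above walk from PCI function to netdev to IPv4 address. Condensed, and using the PCI addresses and interface names this host reported, the discovery amounts to the sketch below (the real code iterates a cached PCI list rather than hard-coding the two functions):

    net_devs=()
    for pci in 0000:da:00.0 0000:da:00.1; do
        devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
        net_devs+=("${devs[@]##*/}")               # keep just the interface names
    done

    get_ip_address() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }

    RDMA_IP_LIST=$(for dev in "${net_devs[@]}"; do get_ip_address "$dev"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9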
00:06:55.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.323 17:11:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:55.323 17:11:03 -- common/autotest_common.sh@10 -- # set +x 00:06:55.323 17:11:03 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:55.323 [2024-04-24 17:11:03.975973] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:06:55.323 [2024-04-24 17:11:03.976018] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:55.323 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.323 [2024-04-24 17:11:04.033669] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:55.323 [2024-04-24 17:11:04.113116] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:55.323 [2024-04-24 17:11:04.113150] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:55.323 [2024-04-24 17:11:04.113157] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:55.323 [2024-04-24 17:11:04.113162] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:55.323 [2024-04-24 17:11:04.113167] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:55.323 [2024-04-24 17:11:04.113210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.323 [2024-04-24 17:11:04.113226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.323 [2024-04-24 17:11:04.113319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:55.323 [2024-04-24 17:11:04.113320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.581 17:11:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:55.581 17:11:04 -- common/autotest_common.sh@850 -- # return 0 00:06:55.581 17:11:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:55.581 17:11:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:55.581 17:11:04 -- common/autotest_common.sh@10 -- # set +x 00:06:55.581 17:11:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:55.581 17:11:04 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:55.581 17:11:04 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:06:55.581 17:11:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:55.581 17:11:04 -- common/autotest_common.sh@10 -- # set +x 00:06:55.581 [2024-04-24 17:11:04.819660] rdma.c:2778:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:06:55.841 [2024-04-24 17:11:04.840270] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24f1f60/0x24f6450) succeed. 00:06:55.841 [2024-04-24 17:11:04.850438] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24f3550/0x2537ae0) succeed. 
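With nvmf_tgt up, the rest of the target-side setup is the transport call just traced plus the bdev/subsystem/namespace/listener calls that follow. Written out as direct rpc.py invocations (the rpc.py path and the default /var/tmp/spdk.sock socket are assumptions on my part; the arguments are the ones in the trace):

    rpc=./scripts/rpc.py    # run from the SPDK checkout; talks to /var/tmp/spdk.sock
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    $rpc bdev_malloc_create 512 512 -b Malloc1        # 512 MiB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420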
00:06:55.841 17:11:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:55.841 17:11:04 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:55.841 17:11:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:55.841 17:11:04 -- common/autotest_common.sh@10 -- # set +x 00:06:55.841 Malloc1 00:06:55.841 17:11:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:55.841 17:11:05 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:55.841 17:11:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:55.841 17:11:05 -- common/autotest_common.sh@10 -- # set +x 00:06:55.841 17:11:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:55.841 17:11:05 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:55.841 17:11:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:55.841 17:11:05 -- common/autotest_common.sh@10 -- # set +x 00:06:55.841 17:11:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:56.099 17:11:05 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:56.099 17:11:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:56.099 17:11:05 -- common/autotest_common.sh@10 -- # set +x 00:06:56.099 [2024-04-24 17:11:05.095890] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:56.099 17:11:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:56.099 17:11:05 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:56.099 17:11:05 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:06:56.099 17:11:05 -- common/autotest_common.sh@1365 -- # local bdev_info 00:06:56.099 17:11:05 -- common/autotest_common.sh@1366 -- # local bs 00:06:56.099 17:11:05 -- common/autotest_common.sh@1367 -- # local nb 00:06:56.099 17:11:05 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:56.099 17:11:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:56.099 17:11:05 -- common/autotest_common.sh@10 -- # set +x 00:06:56.099 17:11:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:56.099 17:11:05 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:06:56.099 { 00:06:56.099 "name": "Malloc1", 00:06:56.099 "aliases": [ 00:06:56.099 "d623e681-4af1-46fd-b75d-b7bdd27f0189" 00:06:56.099 ], 00:06:56.099 "product_name": "Malloc disk", 00:06:56.099 "block_size": 512, 00:06:56.099 "num_blocks": 1048576, 00:06:56.099 "uuid": "d623e681-4af1-46fd-b75d-b7bdd27f0189", 00:06:56.099 "assigned_rate_limits": { 00:06:56.099 "rw_ios_per_sec": 0, 00:06:56.099 "rw_mbytes_per_sec": 0, 00:06:56.099 "r_mbytes_per_sec": 0, 00:06:56.099 "w_mbytes_per_sec": 0 00:06:56.099 }, 00:06:56.099 "claimed": true, 00:06:56.099 "claim_type": "exclusive_write", 00:06:56.099 "zoned": false, 00:06:56.099 "supported_io_types": { 00:06:56.099 "read": true, 00:06:56.099 "write": true, 00:06:56.099 "unmap": true, 00:06:56.099 "write_zeroes": true, 00:06:56.099 "flush": true, 00:06:56.099 "reset": true, 00:06:56.099 "compare": false, 00:06:56.099 "compare_and_write": false, 00:06:56.099 "abort": true, 00:06:56.099 "nvme_admin": false, 00:06:56.099 "nvme_io": false 00:06:56.099 }, 00:06:56.099 "memory_domains": [ 00:06:56.099 { 00:06:56.099 "dma_device_id": "system", 00:06:56.099 "dma_device_type": 1 00:06:56.099 }, 00:06:56.099 { 00:06:56.099 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:06:56.099 "dma_device_type": 2 00:06:56.099 } 00:06:56.099 ], 00:06:56.099 "driver_specific": {} 00:06:56.099 } 00:06:56.099 ]' 00:06:56.099 17:11:05 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:06:56.099 17:11:05 -- common/autotest_common.sh@1369 -- # bs=512 00:06:56.099 17:11:05 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:06:56.099 17:11:05 -- common/autotest_common.sh@1370 -- # nb=1048576 00:06:56.099 17:11:05 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:06:56.099 17:11:05 -- common/autotest_common.sh@1374 -- # echo 512 00:06:56.099 17:11:05 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:56.099 17:11:05 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:06:57.034 17:11:06 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:57.034 17:11:06 -- common/autotest_common.sh@1184 -- # local i=0 00:06:57.034 17:11:06 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:06:57.034 17:11:06 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:06:57.034 17:11:06 -- common/autotest_common.sh@1191 -- # sleep 2 00:06:59.565 17:11:08 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:06:59.565 17:11:08 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:06:59.565 17:11:08 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:06:59.565 17:11:08 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:06:59.565 17:11:08 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:06:59.565 17:11:08 -- common/autotest_common.sh@1194 -- # return 0 00:06:59.565 17:11:08 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:59.565 17:11:08 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:59.565 17:11:08 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:59.565 17:11:08 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:59.565 17:11:08 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:59.565 17:11:08 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:59.565 17:11:08 -- setup/common.sh@80 -- # echo 536870912 00:06:59.565 17:11:08 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:59.565 17:11:08 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:59.565 17:11:08 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:59.565 17:11:08 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:59.565 17:11:08 -- target/filesystem.sh@69 -- # partprobe 00:06:59.565 17:11:08 -- target/filesystem.sh@70 -- # sleep 1 00:07:00.501 17:11:09 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:00.501 17:11:09 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:00.501 17:11:09 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:00.501 17:11:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.501 17:11:09 -- common/autotest_common.sh@10 -- # set +x 00:07:00.501 ************************************ 00:07:00.501 START TEST filesystem_ext4 00:07:00.501 ************************************ 00:07:00.501 17:11:09 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:00.501 17:11:09 -- target/filesystem.sh@18 -- 
# fstype=ext4 00:07:00.501 17:11:09 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:00.501 17:11:09 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:00.501 17:11:09 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:00.501 17:11:09 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:00.501 17:11:09 -- common/autotest_common.sh@914 -- # local i=0 00:07:00.501 17:11:09 -- common/autotest_common.sh@915 -- # local force 00:07:00.501 17:11:09 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:00.501 17:11:09 -- common/autotest_common.sh@918 -- # force=-F 00:07:00.501 17:11:09 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:00.501 mke2fs 1.46.5 (30-Dec-2021) 00:07:00.501 Discarding device blocks: 0/522240 done 00:07:00.501 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:00.501 Filesystem UUID: aca43378-e1b4-4900-9a9d-2190a51b304a 00:07:00.501 Superblock backups stored on blocks: 00:07:00.501 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:00.501 00:07:00.501 Allocating group tables: 0/64 done 00:07:00.501 Writing inode tables: 0/64 done 00:07:00.501 Creating journal (8192 blocks): done 00:07:00.501 Writing superblocks and filesystem accounting information: 0/64 done 00:07:00.501 00:07:00.501 17:11:09 -- common/autotest_common.sh@931 -- # return 0 00:07:00.501 17:11:09 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:00.501 17:11:09 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:00.501 17:11:09 -- target/filesystem.sh@25 -- # sync 00:07:00.501 17:11:09 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:00.501 17:11:09 -- target/filesystem.sh@27 -- # sync 00:07:00.501 17:11:09 -- target/filesystem.sh@29 -- # i=0 00:07:00.501 17:11:09 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:00.501 17:11:09 -- target/filesystem.sh@37 -- # kill -0 2956635 00:07:00.501 17:11:09 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:00.501 17:11:09 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:00.501 17:11:09 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:00.501 17:11:09 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:00.501 00:07:00.501 real 0m0.174s 00:07:00.501 user 0m0.015s 00:07:00.501 sys 0m0.071s 00:07:00.501 17:11:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:00.501 17:11:09 -- common/autotest_common.sh@10 -- # set +x 00:07:00.501 ************************************ 00:07:00.501 END TEST filesystem_ext4 00:07:00.501 ************************************ 00:07:00.501 17:11:09 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:00.501 17:11:09 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:00.501 17:11:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.501 17:11:09 -- common/autotest_common.sh@10 -- # set +x 00:07:00.799 ************************************ 00:07:00.799 START TEST filesystem_btrfs 00:07:00.799 ************************************ 00:07:00.799 17:11:09 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:00.799 17:11:09 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:00.799 17:11:09 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:00.799 17:11:09 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:00.799 17:11:09 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:00.799 17:11:09 -- common/autotest_common.sh@913 
-- # local dev_name=/dev/nvme0n1p1 00:07:00.799 17:11:09 -- common/autotest_common.sh@914 -- # local i=0 00:07:00.799 17:11:09 -- common/autotest_common.sh@915 -- # local force 00:07:00.799 17:11:09 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:00.799 17:11:09 -- common/autotest_common.sh@920 -- # force=-f 00:07:00.799 17:11:09 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:00.799 btrfs-progs v6.6.2 00:07:00.799 See https://btrfs.readthedocs.io for more information. 00:07:00.799 00:07:00.799 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:00.799 NOTE: several default settings have changed in version 5.15, please make sure 00:07:00.799 this does not affect your deployments: 00:07:00.799 - DUP for metadata (-m dup) 00:07:00.799 - enabled no-holes (-O no-holes) 00:07:00.799 - enabled free-space-tree (-R free-space-tree) 00:07:00.799 00:07:00.799 Label: (null) 00:07:00.799 UUID: fd5eb249-b216-4f87-b84f-5df4ba6f0098 00:07:00.799 Node size: 16384 00:07:00.799 Sector size: 4096 00:07:00.799 Filesystem size: 510.00MiB 00:07:00.799 Block group profiles: 00:07:00.799 Data: single 8.00MiB 00:07:00.799 Metadata: DUP 32.00MiB 00:07:00.799 System: DUP 8.00MiB 00:07:00.799 SSD detected: yes 00:07:00.799 Zoned device: no 00:07:00.799 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:00.799 Runtime features: free-space-tree 00:07:00.799 Checksum: crc32c 00:07:00.799 Number of devices: 1 00:07:00.799 Devices: 00:07:00.799 ID SIZE PATH 00:07:00.799 1 510.00MiB /dev/nvme0n1p1 00:07:00.799 00:07:00.799 17:11:09 -- common/autotest_common.sh@931 -- # return 0 00:07:00.799 17:11:09 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:01.058 17:11:10 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:01.058 17:11:10 -- target/filesystem.sh@25 -- # sync 00:07:01.058 17:11:10 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:01.058 17:11:10 -- target/filesystem.sh@27 -- # sync 00:07:01.058 17:11:10 -- target/filesystem.sh@29 -- # i=0 00:07:01.058 17:11:10 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:01.058 17:11:10 -- target/filesystem.sh@37 -- # kill -0 2956635 00:07:01.058 17:11:10 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:01.058 17:11:10 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:01.058 17:11:10 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:01.058 17:11:10 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:01.058 00:07:01.058 real 0m0.240s 00:07:01.058 user 0m0.016s 00:07:01.058 sys 0m0.129s 00:07:01.058 17:11:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:01.058 17:11:10 -- common/autotest_common.sh@10 -- # set +x 00:07:01.058 ************************************ 00:07:01.058 END TEST filesystem_btrfs 00:07:01.058 ************************************ 00:07:01.058 17:11:10 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:01.058 17:11:10 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:01.058 17:11:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.058 17:11:10 -- common/autotest_common.sh@10 -- # set +x 00:07:01.058 ************************************ 00:07:01.058 START TEST filesystem_xfs 00:07:01.058 ************************************ 00:07:01.058 17:11:10 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:07:01.058 17:11:10 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:01.058 17:11:10 -- target/filesystem.sh@19 -- 
# nvme_name=nvme0n1 00:07:01.058 17:11:10 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:01.058 17:11:10 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:01.058 17:11:10 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:01.058 17:11:10 -- common/autotest_common.sh@914 -- # local i=0 00:07:01.058 17:11:10 -- common/autotest_common.sh@915 -- # local force 00:07:01.058 17:11:10 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:01.058 17:11:10 -- common/autotest_common.sh@920 -- # force=-f 00:07:01.058 17:11:10 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:01.317 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:01.317 = sectsz=512 attr=2, projid32bit=1 00:07:01.317 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:01.317 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:01.317 data = bsize=4096 blocks=130560, imaxpct=25 00:07:01.317 = sunit=0 swidth=0 blks 00:07:01.317 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:01.317 log =internal log bsize=4096 blocks=16384, version=2 00:07:01.317 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:01.317 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:01.317 Discarding blocks...Done. 00:07:01.317 17:11:10 -- common/autotest_common.sh@931 -- # return 0 00:07:01.317 17:11:10 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:01.317 17:11:10 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:01.317 17:11:10 -- target/filesystem.sh@25 -- # sync 00:07:01.317 17:11:10 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:01.317 17:11:10 -- target/filesystem.sh@27 -- # sync 00:07:01.317 17:11:10 -- target/filesystem.sh@29 -- # i=0 00:07:01.317 17:11:10 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:01.317 17:11:10 -- target/filesystem.sh@37 -- # kill -0 2956635 00:07:01.317 17:11:10 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:01.317 17:11:10 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:01.317 17:11:10 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:01.317 17:11:10 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:01.317 00:07:01.317 real 0m0.191s 00:07:01.317 user 0m0.027s 00:07:01.317 sys 0m0.064s 00:07:01.317 17:11:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:01.317 17:11:10 -- common/autotest_common.sh@10 -- # set +x 00:07:01.317 ************************************ 00:07:01.317 END TEST filesystem_xfs 00:07:01.317 ************************************ 00:07:01.318 17:11:10 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:01.318 17:11:10 -- target/filesystem.sh@93 -- # sync 00:07:01.318 17:11:10 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:02.252 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:02.252 17:11:11 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:02.252 17:11:11 -- common/autotest_common.sh@1205 -- # local i=0 00:07:02.252 17:11:11 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:02.252 17:11:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:02.252 17:11:11 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:02.252 17:11:11 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:02.252 17:11:11 -- common/autotest_common.sh@1217 -- # return 0 00:07:02.252 17:11:11 -- target/filesystem.sh@97 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:02.252 17:11:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.252 17:11:11 -- common/autotest_common.sh@10 -- # set +x 00:07:02.511 17:11:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.511 17:11:11 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:02.511 17:11:11 -- target/filesystem.sh@101 -- # killprocess 2956635 00:07:02.511 17:11:11 -- common/autotest_common.sh@936 -- # '[' -z 2956635 ']' 00:07:02.511 17:11:11 -- common/autotest_common.sh@940 -- # kill -0 2956635 00:07:02.511 17:11:11 -- common/autotest_common.sh@941 -- # uname 00:07:02.511 17:11:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:02.511 17:11:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2956635 00:07:02.511 17:11:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:02.511 17:11:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:02.511 17:11:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2956635' 00:07:02.511 killing process with pid 2956635 00:07:02.511 17:11:11 -- common/autotest_common.sh@955 -- # kill 2956635 00:07:02.511 17:11:11 -- common/autotest_common.sh@960 -- # wait 2956635 00:07:02.770 17:11:11 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:02.770 00:07:02.770 real 0m8.037s 00:07:02.770 user 0m31.402s 00:07:02.770 sys 0m1.166s 00:07:02.770 17:11:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:02.770 17:11:11 -- common/autotest_common.sh@10 -- # set +x 00:07:02.770 ************************************ 00:07:02.770 END TEST nvmf_filesystem_no_in_capsule 00:07:02.770 ************************************ 00:07:02.770 17:11:12 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:02.770 17:11:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:02.770 17:11:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.770 17:11:12 -- common/autotest_common.sh@10 -- # set +x 00:07:03.028 ************************************ 00:07:03.028 START TEST nvmf_filesystem_in_capsule 00:07:03.028 ************************************ 00:07:03.028 17:11:12 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:07:03.028 17:11:12 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:03.028 17:11:12 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:03.028 17:11:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:03.028 17:11:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:03.028 17:11:12 -- common/autotest_common.sh@10 -- # set +x 00:07:03.028 17:11:12 -- nvmf/common.sh@470 -- # nvmfpid=2957833 00:07:03.028 17:11:12 -- nvmf/common.sh@471 -- # waitforlisten 2957833 00:07:03.028 17:11:12 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:03.028 17:11:12 -- common/autotest_common.sh@817 -- # '[' -z 2957833 ']' 00:07:03.028 17:11:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.028 17:11:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:03.028 17:11:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
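For reference, the host side of the zero-in-capsule pass that finished above (connect, wait for the namespace, partition, then one mkfs/mount/touch/sync/rm/umount cycle per filesystem) condenses to the hedged sketch below; the retry logic, timing, and per-test bookkeeping of the real helpers are left out, and the device and partition names are the ones this run produced:

    nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
        --hostid=803833e2-2ada-e911-906e-0017a4403562

    # Wait for the namespace, then resolve its block device by serial number.
    until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 1; done
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')

    mkdir -p /mnt/device
    parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe

    for fstype in ext4 btrfs xfs; do
        force=-f; [ "$fstype" = ext4 ] && force=-F      # ext4 spells its force flag -F
        mkfs."$fstype" "$force" "/dev/${nvme_name}p1"
        mount "/dev/${nvme_name}p1" /mnt/device
        touch /mnt/device/aaa && sync
        rm /mnt/device/aaa && sync
        umount /mnt/device
    done

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1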
00:07:03.028 17:11:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:03.028 17:11:12 -- common/autotest_common.sh@10 -- # set +x 00:07:03.028 [2024-04-24 17:11:12.144069] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:07:03.028 [2024-04-24 17:11:12.144111] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:03.028 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.028 [2024-04-24 17:11:12.200904] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:03.286 [2024-04-24 17:11:12.279834] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:03.287 [2024-04-24 17:11:12.279869] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:03.287 [2024-04-24 17:11:12.279876] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:03.287 [2024-04-24 17:11:12.279882] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:03.287 [2024-04-24 17:11:12.279887] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:03.287 [2024-04-24 17:11:12.279933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.287 [2024-04-24 17:11:12.280023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.287 [2024-04-24 17:11:12.280122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.287 [2024-04-24 17:11:12.280123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.854 17:11:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:03.854 17:11:12 -- common/autotest_common.sh@850 -- # return 0 00:07:03.854 17:11:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:03.854 17:11:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:03.854 17:11:12 -- common/autotest_common.sh@10 -- # set +x 00:07:03.854 17:11:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:03.854 17:11:12 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:03.854 17:11:12 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:07:03.854 17:11:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:03.854 17:11:12 -- common/autotest_common.sh@10 -- # set +x 00:07:03.854 [2024-04-24 17:11:13.011960] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x797f60/0x79c450) succeed. 00:07:03.854 [2024-04-24 17:11:13.022560] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x799550/0x7ddae0) succeed. 
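The second pass repeats the same bdev/subsystem/namespace/listener setup; the functional difference is the transport created in the trace above, which uses a 4096-byte in-capsule data size so small writes can be carried inside the command capsule instead of being fetched with a separate RDMA transfer. In the same hedged rpc.py form as before:

    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096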
00:07:04.113 17:11:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:04.113 17:11:13 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:04.113 17:11:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:04.113 17:11:13 -- common/autotest_common.sh@10 -- # set +x 00:07:04.113 Malloc1 00:07:04.113 17:11:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:04.113 17:11:13 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:04.113 17:11:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:04.113 17:11:13 -- common/autotest_common.sh@10 -- # set +x 00:07:04.113 17:11:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:04.113 17:11:13 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:04.113 17:11:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:04.113 17:11:13 -- common/autotest_common.sh@10 -- # set +x 00:07:04.113 17:11:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:04.113 17:11:13 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:04.113 17:11:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:04.113 17:11:13 -- common/autotest_common.sh@10 -- # set +x 00:07:04.113 [2024-04-24 17:11:13.282602] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:04.113 17:11:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:04.113 17:11:13 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:04.113 17:11:13 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:07:04.113 17:11:13 -- common/autotest_common.sh@1365 -- # local bdev_info 00:07:04.113 17:11:13 -- common/autotest_common.sh@1366 -- # local bs 00:07:04.113 17:11:13 -- common/autotest_common.sh@1367 -- # local nb 00:07:04.113 17:11:13 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:04.113 17:11:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:04.113 17:11:13 -- common/autotest_common.sh@10 -- # set +x 00:07:04.113 17:11:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:04.113 17:11:13 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:07:04.113 { 00:07:04.113 "name": "Malloc1", 00:07:04.113 "aliases": [ 00:07:04.113 "d1a4da66-fd0f-48eb-a2e4-c62f9dd944ad" 00:07:04.113 ], 00:07:04.113 "product_name": "Malloc disk", 00:07:04.113 "block_size": 512, 00:07:04.113 "num_blocks": 1048576, 00:07:04.113 "uuid": "d1a4da66-fd0f-48eb-a2e4-c62f9dd944ad", 00:07:04.113 "assigned_rate_limits": { 00:07:04.113 "rw_ios_per_sec": 0, 00:07:04.113 "rw_mbytes_per_sec": 0, 00:07:04.113 "r_mbytes_per_sec": 0, 00:07:04.113 "w_mbytes_per_sec": 0 00:07:04.113 }, 00:07:04.113 "claimed": true, 00:07:04.113 "claim_type": "exclusive_write", 00:07:04.113 "zoned": false, 00:07:04.113 "supported_io_types": { 00:07:04.113 "read": true, 00:07:04.113 "write": true, 00:07:04.113 "unmap": true, 00:07:04.113 "write_zeroes": true, 00:07:04.113 "flush": true, 00:07:04.113 "reset": true, 00:07:04.113 "compare": false, 00:07:04.113 "compare_and_write": false, 00:07:04.113 "abort": true, 00:07:04.113 "nvme_admin": false, 00:07:04.113 "nvme_io": false 00:07:04.113 }, 00:07:04.113 "memory_domains": [ 00:07:04.113 { 00:07:04.113 "dma_device_id": "system", 00:07:04.113 "dma_device_type": 1 00:07:04.113 }, 00:07:04.113 { 00:07:04.114 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:04.114 "dma_device_type": 2 00:07:04.114 } 00:07:04.114 ], 00:07:04.114 "driver_specific": {} 00:07:04.114 } 00:07:04.114 ]' 00:07:04.114 17:11:13 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:07:04.114 17:11:13 -- common/autotest_common.sh@1369 -- # bs=512 00:07:04.114 17:11:13 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:07:04.372 17:11:13 -- common/autotest_common.sh@1370 -- # nb=1048576 00:07:04.372 17:11:13 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:07:04.372 17:11:13 -- common/autotest_common.sh@1374 -- # echo 512 00:07:04.372 17:11:13 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:04.372 17:11:13 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:05.308 17:11:14 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:05.308 17:11:14 -- common/autotest_common.sh@1184 -- # local i=0 00:07:05.308 17:11:14 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:05.308 17:11:14 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:05.308 17:11:14 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:07.212 17:11:16 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:07.212 17:11:16 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:07.212 17:11:16 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:07.212 17:11:16 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:07.212 17:11:16 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:07.212 17:11:16 -- common/autotest_common.sh@1194 -- # return 0 00:07:07.212 17:11:16 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:07.212 17:11:16 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:07.212 17:11:16 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:07.212 17:11:16 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:07.212 17:11:16 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:07.212 17:11:16 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:07.212 17:11:16 -- setup/common.sh@80 -- # echo 536870912 00:07:07.212 17:11:16 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:07.212 17:11:16 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:07.212 17:11:16 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:07.212 17:11:16 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:07.212 17:11:16 -- target/filesystem.sh@69 -- # partprobe 00:07:07.470 17:11:16 -- target/filesystem.sh@70 -- # sleep 1 00:07:08.407 17:11:17 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:08.407 17:11:17 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:08.407 17:11:17 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:08.407 17:11:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.407 17:11:17 -- common/autotest_common.sh@10 -- # set +x 00:07:08.407 ************************************ 00:07:08.407 START TEST filesystem_in_capsule_ext4 00:07:08.407 ************************************ 00:07:08.407 17:11:17 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:08.407 17:11:17 -- 
target/filesystem.sh@18 -- # fstype=ext4 00:07:08.407 17:11:17 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:08.407 17:11:17 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:08.407 17:11:17 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:08.407 17:11:17 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:08.407 17:11:17 -- common/autotest_common.sh@914 -- # local i=0 00:07:08.407 17:11:17 -- common/autotest_common.sh@915 -- # local force 00:07:08.407 17:11:17 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:08.407 17:11:17 -- common/autotest_common.sh@918 -- # force=-F 00:07:08.407 17:11:17 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:08.407 mke2fs 1.46.5 (30-Dec-2021) 00:07:08.666 Discarding device blocks: 0/522240 done 00:07:08.666 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:08.666 Filesystem UUID: c4a8d185-e4ff-4a20-ad11-eec2a8841637 00:07:08.666 Superblock backups stored on blocks: 00:07:08.666 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:08.666 00:07:08.666 Allocating group tables: 0/64 done 00:07:08.666 Writing inode tables: 0/64 done 00:07:08.666 Creating journal (8192 blocks): done 00:07:08.666 Writing superblocks and filesystem accounting information: 0/64 done 00:07:08.666 00:07:08.666 17:11:17 -- common/autotest_common.sh@931 -- # return 0 00:07:08.666 17:11:17 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:08.666 17:11:17 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:08.666 17:11:17 -- target/filesystem.sh@25 -- # sync 00:07:08.666 17:11:17 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:08.666 17:11:17 -- target/filesystem.sh@27 -- # sync 00:07:08.666 17:11:17 -- target/filesystem.sh@29 -- # i=0 00:07:08.666 17:11:17 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:08.666 17:11:17 -- target/filesystem.sh@37 -- # kill -0 2957833 00:07:08.666 17:11:17 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:08.666 17:11:17 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:08.666 17:11:17 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:08.666 17:11:17 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:08.666 00:07:08.666 real 0m0.175s 00:07:08.666 user 0m0.025s 00:07:08.666 sys 0m0.063s 00:07:08.666 17:11:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:08.666 17:11:17 -- common/autotest_common.sh@10 -- # set +x 00:07:08.666 ************************************ 00:07:08.666 END TEST filesystem_in_capsule_ext4 00:07:08.666 ************************************ 00:07:08.666 17:11:17 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:08.666 17:11:17 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:08.666 17:11:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.666 17:11:17 -- common/autotest_common.sh@10 -- # set +x 00:07:08.925 ************************************ 00:07:08.925 START TEST filesystem_in_capsule_btrfs 00:07:08.925 ************************************ 00:07:08.925 17:11:17 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:08.925 17:11:17 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:08.925 17:11:17 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:08.925 17:11:17 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:08.925 17:11:17 -- common/autotest_common.sh@912 -- # local 
fstype=btrfs 00:07:08.925 17:11:17 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:08.925 17:11:17 -- common/autotest_common.sh@914 -- # local i=0 00:07:08.925 17:11:17 -- common/autotest_common.sh@915 -- # local force 00:07:08.926 17:11:17 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:08.926 17:11:17 -- common/autotest_common.sh@920 -- # force=-f 00:07:08.926 17:11:17 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:08.926 btrfs-progs v6.6.2 00:07:08.926 See https://btrfs.readthedocs.io for more information. 00:07:08.926 00:07:08.926 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:08.926 NOTE: several default settings have changed in version 5.15, please make sure 00:07:08.926 this does not affect your deployments: 00:07:08.926 - DUP for metadata (-m dup) 00:07:08.926 - enabled no-holes (-O no-holes) 00:07:08.926 - enabled free-space-tree (-R free-space-tree) 00:07:08.926 00:07:08.926 Label: (null) 00:07:08.926 UUID: e3ecedf6-6152-40d8-a917-ec574e20673c 00:07:08.926 Node size: 16384 00:07:08.926 Sector size: 4096 00:07:08.926 Filesystem size: 510.00MiB 00:07:08.926 Block group profiles: 00:07:08.926 Data: single 8.00MiB 00:07:08.926 Metadata: DUP 32.00MiB 00:07:08.926 System: DUP 8.00MiB 00:07:08.926 SSD detected: yes 00:07:08.926 Zoned device: no 00:07:08.926 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:08.926 Runtime features: free-space-tree 00:07:08.926 Checksum: crc32c 00:07:08.926 Number of devices: 1 00:07:08.926 Devices: 00:07:08.926 ID SIZE PATH 00:07:08.926 1 510.00MiB /dev/nvme0n1p1 00:07:08.926 00:07:08.926 17:11:18 -- common/autotest_common.sh@931 -- # return 0 00:07:08.926 17:11:18 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:08.926 17:11:18 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:08.926 17:11:18 -- target/filesystem.sh@25 -- # sync 00:07:08.926 17:11:18 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:08.926 17:11:18 -- target/filesystem.sh@27 -- # sync 00:07:08.926 17:11:18 -- target/filesystem.sh@29 -- # i=0 00:07:08.926 17:11:18 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:08.926 17:11:18 -- target/filesystem.sh@37 -- # kill -0 2957833 00:07:08.926 17:11:18 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:08.926 17:11:18 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:08.926 17:11:18 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:08.926 17:11:18 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:08.926 00:07:08.926 real 0m0.240s 00:07:08.926 user 0m0.026s 00:07:08.926 sys 0m0.118s 00:07:08.926 17:11:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:08.926 17:11:18 -- common/autotest_common.sh@10 -- # set +x 00:07:08.926 ************************************ 00:07:08.926 END TEST filesystem_in_capsule_btrfs 00:07:08.926 ************************************ 00:07:09.185 17:11:18 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:09.185 17:11:18 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:09.185 17:11:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.185 17:11:18 -- common/autotest_common.sh@10 -- # set +x 00:07:09.185 ************************************ 00:07:09.185 START TEST filesystem_in_capsule_xfs 00:07:09.185 ************************************ 00:07:09.185 17:11:18 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:07:09.185 
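The xfs pass below repeats the cycle the ext4 and btrfs passes above just completed: build the filesystem on the namespace's first partition, mount it, write and delete a file, then unmount and confirm the target process (pid 2957833) and the block devices are still present. The malloc_size checked earlier was derived from the bdev JSON: block_size 512 * num_blocks 1048576 = 536870912 bytes, matching the 536870912 reported for the connected namespace. A minimal stand-alone sketch of that per-filesystem cycle, with the device and mount point taken from this log and everything else hypothetical:

    #!/usr/bin/env bash
    # Sketch of the check performed by each filesystem_in_capsule_* test above.
    set -euo pipefail

    fstype=$1                  # ext4 | btrfs | xfs
    dev=/dev/nvme0n1p1         # partition created earlier with parted mklabel/mkpart
    mnt=/mnt/device

    case "$fstype" in
      ext4) mkfs.ext4 -F "$dev" ;;        # ext4 needs -F to overwrite
      *)    "mkfs.$fstype" -f "$dev" ;;   # btrfs and xfs take -f
    esac

    mkdir -p "$mnt"
    mount "$dev" "$mnt"
    touch "$mnt/aaa"           # prove the filesystem accepts writes
    sync
    rm "$mnt/aaa"
    sync
    umount "$mnt"

    # the namespace and its partition must still be visible afterwards
    lsblk -l -o NAME | grep -q -w nvme0n1
    lsblk -l -o NAME | grep -q -w nvme0n1p1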
17:11:18 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:09.185 17:11:18 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:09.185 17:11:18 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:09.185 17:11:18 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:09.185 17:11:18 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:09.185 17:11:18 -- common/autotest_common.sh@914 -- # local i=0 00:07:09.185 17:11:18 -- common/autotest_common.sh@915 -- # local force 00:07:09.185 17:11:18 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:09.185 17:11:18 -- common/autotest_common.sh@920 -- # force=-f 00:07:09.185 17:11:18 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:09.185 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:09.185 = sectsz=512 attr=2, projid32bit=1 00:07:09.185 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:09.185 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:09.185 data = bsize=4096 blocks=130560, imaxpct=25 00:07:09.185 = sunit=0 swidth=0 blks 00:07:09.185 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:09.185 log =internal log bsize=4096 blocks=16384, version=2 00:07:09.185 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:09.185 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:09.185 Discarding blocks...Done. 00:07:09.185 17:11:18 -- common/autotest_common.sh@931 -- # return 0 00:07:09.185 17:11:18 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:09.443 17:11:18 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:09.443 17:11:18 -- target/filesystem.sh@25 -- # sync 00:07:09.443 17:11:18 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:09.443 17:11:18 -- target/filesystem.sh@27 -- # sync 00:07:09.443 17:11:18 -- target/filesystem.sh@29 -- # i=0 00:07:09.443 17:11:18 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:09.443 17:11:18 -- target/filesystem.sh@37 -- # kill -0 2957833 00:07:09.443 17:11:18 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:09.443 17:11:18 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:09.443 17:11:18 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:09.443 17:11:18 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:09.443 00:07:09.443 real 0m0.189s 00:07:09.443 user 0m0.022s 00:07:09.443 sys 0m0.066s 00:07:09.443 17:11:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:09.443 17:11:18 -- common/autotest_common.sh@10 -- # set +x 00:07:09.443 ************************************ 00:07:09.443 END TEST filesystem_in_capsule_xfs 00:07:09.443 ************************************ 00:07:09.443 17:11:18 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:09.443 17:11:18 -- target/filesystem.sh@93 -- # sync 00:07:09.443 17:11:18 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:10.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:10.381 17:11:19 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:10.381 17:11:19 -- common/autotest_common.sh@1205 -- # local i=0 00:07:10.381 17:11:19 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:10.381 17:11:19 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:10.381 17:11:19 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:10.381 17:11:19 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:10.381 17:11:19 -- 
common/autotest_common.sh@1217 -- # return 0 00:07:10.381 17:11:19 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:10.381 17:11:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:10.381 17:11:19 -- common/autotest_common.sh@10 -- # set +x 00:07:10.381 17:11:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:10.381 17:11:19 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:10.381 17:11:19 -- target/filesystem.sh@101 -- # killprocess 2957833 00:07:10.381 17:11:19 -- common/autotest_common.sh@936 -- # '[' -z 2957833 ']' 00:07:10.381 17:11:19 -- common/autotest_common.sh@940 -- # kill -0 2957833 00:07:10.381 17:11:19 -- common/autotest_common.sh@941 -- # uname 00:07:10.381 17:11:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:10.381 17:11:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2957833 00:07:10.381 17:11:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:10.381 17:11:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:10.381 17:11:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2957833' 00:07:10.381 killing process with pid 2957833 00:07:10.381 17:11:19 -- common/autotest_common.sh@955 -- # kill 2957833 00:07:10.381 17:11:19 -- common/autotest_common.sh@960 -- # wait 2957833 00:07:10.948 17:11:20 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:10.948 00:07:10.948 real 0m7.926s 00:07:10.948 user 0m30.868s 00:07:10.948 sys 0m1.148s 00:07:10.948 17:11:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:10.948 17:11:20 -- common/autotest_common.sh@10 -- # set +x 00:07:10.948 ************************************ 00:07:10.948 END TEST nvmf_filesystem_in_capsule 00:07:10.948 ************************************ 00:07:10.948 17:11:20 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:10.948 17:11:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:10.948 17:11:20 -- nvmf/common.sh@117 -- # sync 00:07:10.948 17:11:20 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:10.948 17:11:20 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:10.948 17:11:20 -- nvmf/common.sh@120 -- # set +e 00:07:10.948 17:11:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:10.948 17:11:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:10.948 rmmod nvme_rdma 00:07:10.948 rmmod nvme_fabrics 00:07:10.948 17:11:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:10.948 17:11:20 -- nvmf/common.sh@124 -- # set -e 00:07:10.948 17:11:20 -- nvmf/common.sh@125 -- # return 0 00:07:10.948 17:11:20 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:07:10.948 17:11:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:10.948 17:11:20 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:07:10.948 00:07:10.948 real 0m21.507s 00:07:10.948 user 1m3.788s 00:07:10.948 sys 0m6.340s 00:07:10.948 17:11:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:10.948 17:11:20 -- common/autotest_common.sh@10 -- # set +x 00:07:10.948 ************************************ 00:07:10.948 END TEST nvmf_filesystem 00:07:10.948 ************************************ 00:07:10.948 17:11:20 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:07:10.948 17:11:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:10.948 17:11:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.948 17:11:20 -- common/autotest_common.sh@10 -- # set 
+x 00:07:11.207 ************************************ 00:07:11.207 START TEST nvmf_discovery 00:07:11.207 ************************************ 00:07:11.207 17:11:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:07:11.207 * Looking for test storage... 00:07:11.207 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:11.207 17:11:20 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:11.207 17:11:20 -- nvmf/common.sh@7 -- # uname -s 00:07:11.207 17:11:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:11.207 17:11:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.207 17:11:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.207 17:11:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:11.207 17:11:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.207 17:11:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.207 17:11:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.207 17:11:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.207 17:11:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.207 17:11:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.207 17:11:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:07:11.207 17:11:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:07:11.207 17:11:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:11.207 17:11:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.207 17:11:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:11.207 17:11:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:11.207 17:11:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:11.207 17:11:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.207 17:11:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.207 17:11:20 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.207 17:11:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.207 17:11:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.207 17:11:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.207 17:11:20 -- paths/export.sh@5 -- # export PATH 00:07:11.207 17:11:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.207 17:11:20 -- nvmf/common.sh@47 -- # : 0 00:07:11.207 17:11:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:11.207 17:11:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:11.207 17:11:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:11.207 17:11:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:11.207 17:11:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.207 17:11:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:11.207 17:11:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:11.207 17:11:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:11.207 17:11:20 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:11.207 17:11:20 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:11.207 17:11:20 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:11.207 17:11:20 -- target/discovery.sh@15 -- # hash nvme 00:07:11.207 17:11:20 -- target/discovery.sh@20 -- # nvmftestinit 00:07:11.207 17:11:20 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:07:11.208 17:11:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:11.208 17:11:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:11.208 17:11:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:11.208 17:11:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:11.208 17:11:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.208 17:11:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:11.208 17:11:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.208 17:11:20 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:11.208 17:11:20 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:11.208 17:11:20 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:11.208 17:11:20 -- common/autotest_common.sh@10 -- # set +x 00:07:16.478 17:11:25 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:16.478 17:11:25 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:16.478 17:11:25 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:16.478 17:11:25 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:16.478 17:11:25 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:16.478 17:11:25 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:16.478 17:11:25 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:16.478 17:11:25 -- 
nvmf/common.sh@295 -- # net_devs=() 00:07:16.478 17:11:25 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:16.478 17:11:25 -- nvmf/common.sh@296 -- # e810=() 00:07:16.478 17:11:25 -- nvmf/common.sh@296 -- # local -ga e810 00:07:16.478 17:11:25 -- nvmf/common.sh@297 -- # x722=() 00:07:16.478 17:11:25 -- nvmf/common.sh@297 -- # local -ga x722 00:07:16.478 17:11:25 -- nvmf/common.sh@298 -- # mlx=() 00:07:16.478 17:11:25 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:16.478 17:11:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:16.478 17:11:25 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:16.478 17:11:25 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:16.478 17:11:25 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:16.478 17:11:25 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:16.478 17:11:25 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:16.478 17:11:25 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:16.478 17:11:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:16.478 17:11:25 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:16.478 17:11:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:16.478 17:11:25 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:16.478 17:11:25 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:16.478 17:11:25 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:16.478 17:11:25 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:16.478 17:11:25 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:16.478 17:11:25 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:16.478 17:11:25 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:16.478 17:11:25 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:16.478 17:11:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:16.478 17:11:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:07:16.478 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:07:16.478 17:11:25 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:16.478 17:11:25 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:16.478 17:11:25 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:16.478 17:11:25 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:16.478 17:11:25 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:16.478 17:11:25 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:16.478 17:11:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:16.478 17:11:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:07:16.478 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:07:16.478 17:11:25 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:16.478 17:11:25 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:16.478 17:11:25 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:16.478 17:11:25 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:16.478 17:11:25 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:16.478 17:11:25 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:16.478 17:11:25 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:16.478 17:11:25 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:16.478 17:11:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:16.478 
17:11:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:16.478 17:11:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:16.478 17:11:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:16.478 17:11:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:07:16.478 Found net devices under 0000:da:00.0: mlx_0_0 00:07:16.478 17:11:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:16.478 17:11:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:16.478 17:11:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:16.478 17:11:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:16.478 17:11:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:16.478 17:11:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:07:16.478 Found net devices under 0000:da:00.1: mlx_0_1 00:07:16.478 17:11:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:16.478 17:11:25 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:16.478 17:11:25 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:16.478 17:11:25 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:16.478 17:11:25 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:07:16.478 17:11:25 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:07:16.478 17:11:25 -- nvmf/common.sh@409 -- # rdma_device_init 00:07:16.478 17:11:25 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:07:16.478 17:11:25 -- nvmf/common.sh@58 -- # uname 00:07:16.478 17:11:25 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:16.478 17:11:25 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:16.478 17:11:25 -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:16.478 17:11:25 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:16.478 17:11:25 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:16.478 17:11:25 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:16.478 17:11:25 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:16.478 17:11:25 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:16.478 17:11:25 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:07:16.478 17:11:25 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:16.478 17:11:25 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:16.478 17:11:25 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:16.478 17:11:25 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:16.478 17:11:25 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:16.478 17:11:25 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:16.478 17:11:25 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:16.478 17:11:25 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:16.478 17:11:25 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:16.478 17:11:25 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:16.478 17:11:25 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:16.478 17:11:25 -- nvmf/common.sh@105 -- # continue 2 00:07:16.478 17:11:25 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:16.478 17:11:25 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:16.478 17:11:25 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:16.478 17:11:25 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:16.478 17:11:25 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:16.478 17:11:25 -- nvmf/common.sh@104 -- # 
echo mlx_0_1 00:07:16.478 17:11:25 -- nvmf/common.sh@105 -- # continue 2 00:07:16.478 17:11:25 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:16.478 17:11:25 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:16.478 17:11:25 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:16.478 17:11:25 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:16.478 17:11:25 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:16.478 17:11:25 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:16.478 17:11:25 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:16.478 17:11:25 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:16.478 17:11:25 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:16.478 430: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:16.478 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:07:16.478 altname enp218s0f0np0 00:07:16.478 altname ens818f0np0 00:07:16.478 inet 192.168.100.8/24 scope global mlx_0_0 00:07:16.478 valid_lft forever preferred_lft forever 00:07:16.478 17:11:25 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:16.478 17:11:25 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:16.478 17:11:25 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:16.478 17:11:25 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:16.478 17:11:25 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:16.478 17:11:25 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:16.478 17:11:25 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:16.478 17:11:25 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:16.478 17:11:25 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:16.478 431: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:16.478 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:07:16.478 altname enp218s0f1np1 00:07:16.478 altname ens818f1np1 00:07:16.478 inet 192.168.100.9/24 scope global mlx_0_1 00:07:16.478 valid_lft forever preferred_lft forever 00:07:16.478 17:11:25 -- nvmf/common.sh@411 -- # return 0 00:07:16.478 17:11:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:16.478 17:11:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:16.478 17:11:25 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:07:16.478 17:11:25 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:07:16.478 17:11:25 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:16.478 17:11:25 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:16.478 17:11:25 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:16.478 17:11:25 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:16.478 17:11:25 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:16.478 17:11:25 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:16.478 17:11:25 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:16.478 17:11:25 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:16.478 17:11:25 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:16.478 17:11:25 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:16.478 17:11:25 -- nvmf/common.sh@105 -- # continue 2 00:07:16.478 17:11:25 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:16.478 17:11:25 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:16.478 17:11:25 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:16.478 17:11:25 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:16.478 17:11:25 -- 
nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:16.738 17:11:25 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:16.738 17:11:25 -- nvmf/common.sh@105 -- # continue 2 00:07:16.738 17:11:25 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:16.738 17:11:25 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:16.738 17:11:25 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:16.738 17:11:25 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:16.738 17:11:25 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:16.738 17:11:25 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:16.738 17:11:25 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:16.738 17:11:25 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:16.738 17:11:25 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:16.738 17:11:25 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:16.738 17:11:25 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:16.738 17:11:25 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:16.738 17:11:25 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:07:16.738 192.168.100.9' 00:07:16.738 17:11:25 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:07:16.738 192.168.100.9' 00:07:16.738 17:11:25 -- nvmf/common.sh@446 -- # head -n 1 00:07:16.738 17:11:25 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:16.738 17:11:25 -- nvmf/common.sh@447 -- # tail -n +2 00:07:16.738 17:11:25 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:07:16.738 192.168.100.9' 00:07:16.738 17:11:25 -- nvmf/common.sh@447 -- # head -n 1 00:07:16.738 17:11:25 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:16.738 17:11:25 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:07:16.738 17:11:25 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:16.738 17:11:25 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:07:16.738 17:11:25 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:07:16.738 17:11:25 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:07:16.738 17:11:25 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:16.738 17:11:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:16.738 17:11:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:16.738 17:11:25 -- common/autotest_common.sh@10 -- # set +x 00:07:16.738 17:11:25 -- nvmf/common.sh@470 -- # nvmfpid=2960368 00:07:16.738 17:11:25 -- nvmf/common.sh@471 -- # waitforlisten 2960368 00:07:16.738 17:11:25 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:16.738 17:11:25 -- common/autotest_common.sh@817 -- # '[' -z 2960368 ']' 00:07:16.738 17:11:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.738 17:11:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:16.738 17:11:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.738 17:11:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:16.738 17:11:25 -- common/autotest_common.sh@10 -- # set +x 00:07:16.738 [2024-04-24 17:11:25.839122] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
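The 192.168.100.8 and 192.168.100.9 target addresses used throughout come straight from the mlx_0_0 and mlx_0_1 interfaces enumerated above: the harness takes the first IPv4 address configured on each RDMA-capable netdev. A rough equivalent of that lookup, with the interface names as they appear in this log:

    # first IPv4 address of each RDMA-capable interface, as derived above
    for ifc in mlx_0_0 mlx_0_1; do
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    done
    # prints 192.168.100.8 then 192.168.100.9 on this rig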
00:07:16.738 [2024-04-24 17:11:25.839167] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.738 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.738 [2024-04-24 17:11:25.894839] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:16.738 [2024-04-24 17:11:25.975566] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:16.738 [2024-04-24 17:11:25.975605] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:16.738 [2024-04-24 17:11:25.975612] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:16.738 [2024-04-24 17:11:25.975618] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:16.738 [2024-04-24 17:11:25.975623] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:16.738 [2024-04-24 17:11:25.975664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.738 [2024-04-24 17:11:25.975759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.738 [2024-04-24 17:11:25.975778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:16.738 [2024-04-24 17:11:25.975780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.674 17:11:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:17.674 17:11:26 -- common/autotest_common.sh@850 -- # return 0 00:07:17.674 17:11:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:17.674 17:11:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:17.674 17:11:26 -- common/autotest_common.sh@10 -- # set +x 00:07:17.674 17:11:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:17.674 17:11:26 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:17.674 17:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.674 17:11:26 -- common/autotest_common.sh@10 -- # set +x 00:07:17.674 [2024-04-24 17:11:26.692715] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd59f60/0xd5e450) succeed. 00:07:17.674 [2024-04-24 17:11:26.702913] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd5b550/0xd9fae0) succeed. 
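Everything the discovery log and the nvmf_get_subsystems dump below report is created with a short run of RPC calls; rpc_cmd in these scripts forwards to SPDK's scripts/rpc.py over /var/tmp/spdk.sock. Re-expressed as direct invocations, a condensed sketch (addresses, NQNs and sizes as used in this run; the loop is just a compression of the per-subsystem steps, and the rpc.py path is inferred from the workspace paths in this log):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

    for i in 1 2 3 4; do
        # size and block size as in discovery.sh (NULL_BDEV_SIZE / NULL_BLOCK_SIZE)
        $rpc bdev_null_create "Null$i" 102400 512
        $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
             -a -s "SPDK0000000000000$i"
        $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
             -t rdma -a 192.168.100.8 -s 4420
    done

    # expose the discovery service and add a referral on port 4430
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430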
00:07:17.674 17:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.674 17:11:26 -- target/discovery.sh@26 -- # seq 1 4 00:07:17.674 17:11:26 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:17.674 17:11:26 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:17.674 17:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.674 17:11:26 -- common/autotest_common.sh@10 -- # set +x 00:07:17.674 Null1 00:07:17.674 17:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.674 17:11:26 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:17.674 17:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.674 17:11:26 -- common/autotest_common.sh@10 -- # set +x 00:07:17.674 17:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.674 17:11:26 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:17.674 17:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.674 17:11:26 -- common/autotest_common.sh@10 -- # set +x 00:07:17.674 17:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.674 17:11:26 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:17.674 17:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.675 17:11:26 -- common/autotest_common.sh@10 -- # set +x 00:07:17.675 [2024-04-24 17:11:26.866042] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:17.675 17:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.675 17:11:26 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:17.675 17:11:26 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:17.675 17:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.675 17:11:26 -- common/autotest_common.sh@10 -- # set +x 00:07:17.675 Null2 00:07:17.675 17:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.675 17:11:26 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:17.675 17:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.675 17:11:26 -- common/autotest_common.sh@10 -- # set +x 00:07:17.675 17:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.675 17:11:26 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:17.675 17:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.675 17:11:26 -- common/autotest_common.sh@10 -- # set +x 00:07:17.675 17:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.675 17:11:26 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:07:17.675 17:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.675 17:11:26 -- common/autotest_common.sh@10 -- # set +x 00:07:17.675 17:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.675 17:11:26 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:17.675 17:11:26 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:17.675 17:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.675 17:11:26 -- common/autotest_common.sh@10 -- # set +x 00:07:17.675 Null3 00:07:17.675 17:11:26 -- common/autotest_common.sh@577 -- # [[ 
0 == 0 ]] 00:07:17.675 17:11:26 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:17.675 17:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.675 17:11:26 -- common/autotest_common.sh@10 -- # set +x 00:07:17.675 17:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.675 17:11:26 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:17.675 17:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.675 17:11:26 -- common/autotest_common.sh@10 -- # set +x 00:07:17.934 17:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.934 17:11:26 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:07:17.934 17:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.934 17:11:26 -- common/autotest_common.sh@10 -- # set +x 00:07:17.934 17:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.934 17:11:26 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:17.934 17:11:26 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:17.934 17:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.934 17:11:26 -- common/autotest_common.sh@10 -- # set +x 00:07:17.934 Null4 00:07:17.934 17:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.934 17:11:26 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:17.934 17:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.934 17:11:26 -- common/autotest_common.sh@10 -- # set +x 00:07:17.934 17:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.934 17:11:26 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:17.934 17:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.934 17:11:26 -- common/autotest_common.sh@10 -- # set +x 00:07:17.934 17:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.934 17:11:26 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:07:17.934 17:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.934 17:11:26 -- common/autotest_common.sh@10 -- # set +x 00:07:17.934 17:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.934 17:11:26 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:17.934 17:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.934 17:11:26 -- common/autotest_common.sh@10 -- # set +x 00:07:17.934 17:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.934 17:11:26 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:07:17.934 17:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.934 17:11:26 -- common/autotest_common.sh@10 -- # set +x 00:07:17.934 17:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.934 17:11:26 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:07:17.934 00:07:17.934 Discovery Log Number of Records 6, Generation counter 6 00:07:17.934 =====Discovery Log Entry 0====== 00:07:17.934 trtype: 
rdma 00:07:17.934 adrfam: ipv4 00:07:17.934 subtype: current discovery subsystem 00:07:17.934 treq: not required 00:07:17.934 portid: 0 00:07:17.934 trsvcid: 4420 00:07:17.934 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:17.934 traddr: 192.168.100.8 00:07:17.934 eflags: explicit discovery connections, duplicate discovery information 00:07:17.934 rdma_prtype: not specified 00:07:17.934 rdma_qptype: connected 00:07:17.934 rdma_cms: rdma-cm 00:07:17.934 rdma_pkey: 0x0000 00:07:17.934 =====Discovery Log Entry 1====== 00:07:17.934 trtype: rdma 00:07:17.934 adrfam: ipv4 00:07:17.934 subtype: nvme subsystem 00:07:17.934 treq: not required 00:07:17.934 portid: 0 00:07:17.934 trsvcid: 4420 00:07:17.934 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:17.934 traddr: 192.168.100.8 00:07:17.934 eflags: none 00:07:17.935 rdma_prtype: not specified 00:07:17.935 rdma_qptype: connected 00:07:17.935 rdma_cms: rdma-cm 00:07:17.935 rdma_pkey: 0x0000 00:07:17.935 =====Discovery Log Entry 2====== 00:07:17.935 trtype: rdma 00:07:17.935 adrfam: ipv4 00:07:17.935 subtype: nvme subsystem 00:07:17.935 treq: not required 00:07:17.935 portid: 0 00:07:17.935 trsvcid: 4420 00:07:17.935 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:17.935 traddr: 192.168.100.8 00:07:17.935 eflags: none 00:07:17.935 rdma_prtype: not specified 00:07:17.935 rdma_qptype: connected 00:07:17.935 rdma_cms: rdma-cm 00:07:17.935 rdma_pkey: 0x0000 00:07:17.935 =====Discovery Log Entry 3====== 00:07:17.935 trtype: rdma 00:07:17.935 adrfam: ipv4 00:07:17.935 subtype: nvme subsystem 00:07:17.935 treq: not required 00:07:17.935 portid: 0 00:07:17.935 trsvcid: 4420 00:07:17.935 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:17.935 traddr: 192.168.100.8 00:07:17.935 eflags: none 00:07:17.935 rdma_prtype: not specified 00:07:17.935 rdma_qptype: connected 00:07:17.935 rdma_cms: rdma-cm 00:07:17.935 rdma_pkey: 0x0000 00:07:17.935 =====Discovery Log Entry 4====== 00:07:17.935 trtype: rdma 00:07:17.935 adrfam: ipv4 00:07:17.935 subtype: nvme subsystem 00:07:17.935 treq: not required 00:07:17.935 portid: 0 00:07:17.935 trsvcid: 4420 00:07:17.935 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:17.935 traddr: 192.168.100.8 00:07:17.935 eflags: none 00:07:17.935 rdma_prtype: not specified 00:07:17.935 rdma_qptype: connected 00:07:17.935 rdma_cms: rdma-cm 00:07:17.935 rdma_pkey: 0x0000 00:07:17.935 =====Discovery Log Entry 5====== 00:07:17.935 trtype: rdma 00:07:17.935 adrfam: ipv4 00:07:17.935 subtype: discovery subsystem referral 00:07:17.935 treq: not required 00:07:17.935 portid: 0 00:07:17.935 trsvcid: 4430 00:07:17.935 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:17.935 traddr: 192.168.100.8 00:07:17.935 eflags: none 00:07:17.935 rdma_prtype: unrecognized 00:07:17.935 rdma_qptype: unrecognized 00:07:17.935 rdma_cms: unrecognized 00:07:17.935 rdma_pkey: 0x0000 00:07:17.935 17:11:27 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:17.935 Perform nvmf subsystem discovery via RPC 00:07:17.935 17:11:27 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:17.935 17:11:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.935 17:11:27 -- common/autotest_common.sh@10 -- # set +x 00:07:17.935 [2024-04-24 17:11:27.066439] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:07:17.935 [ 00:07:17.935 { 00:07:17.935 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:17.935 "subtype": "Discovery", 
00:07:17.935 "listen_addresses": [ 00:07:17.935 { 00:07:17.935 "transport": "RDMA", 00:07:17.935 "trtype": "RDMA", 00:07:17.935 "adrfam": "IPv4", 00:07:17.935 "traddr": "192.168.100.8", 00:07:17.935 "trsvcid": "4420" 00:07:17.935 } 00:07:17.935 ], 00:07:17.935 "allow_any_host": true, 00:07:17.935 "hosts": [] 00:07:17.935 }, 00:07:17.935 { 00:07:17.935 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:17.935 "subtype": "NVMe", 00:07:17.935 "listen_addresses": [ 00:07:17.935 { 00:07:17.935 "transport": "RDMA", 00:07:17.935 "trtype": "RDMA", 00:07:17.935 "adrfam": "IPv4", 00:07:17.935 "traddr": "192.168.100.8", 00:07:17.935 "trsvcid": "4420" 00:07:17.935 } 00:07:17.935 ], 00:07:17.935 "allow_any_host": true, 00:07:17.935 "hosts": [], 00:07:17.935 "serial_number": "SPDK00000000000001", 00:07:17.935 "model_number": "SPDK bdev Controller", 00:07:17.935 "max_namespaces": 32, 00:07:17.935 "min_cntlid": 1, 00:07:17.935 "max_cntlid": 65519, 00:07:17.935 "namespaces": [ 00:07:17.935 { 00:07:17.935 "nsid": 1, 00:07:17.935 "bdev_name": "Null1", 00:07:17.935 "name": "Null1", 00:07:17.935 "nguid": "7CD6F59F03E643348BDE1D14B49481C1", 00:07:17.935 "uuid": "7cd6f59f-03e6-4334-8bde-1d14b49481c1" 00:07:17.935 } 00:07:17.935 ] 00:07:17.935 }, 00:07:17.935 { 00:07:17.935 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:17.935 "subtype": "NVMe", 00:07:17.935 "listen_addresses": [ 00:07:17.935 { 00:07:17.935 "transport": "RDMA", 00:07:17.935 "trtype": "RDMA", 00:07:17.935 "adrfam": "IPv4", 00:07:17.935 "traddr": "192.168.100.8", 00:07:17.935 "trsvcid": "4420" 00:07:17.935 } 00:07:17.935 ], 00:07:17.935 "allow_any_host": true, 00:07:17.935 "hosts": [], 00:07:17.935 "serial_number": "SPDK00000000000002", 00:07:17.935 "model_number": "SPDK bdev Controller", 00:07:17.935 "max_namespaces": 32, 00:07:17.935 "min_cntlid": 1, 00:07:17.935 "max_cntlid": 65519, 00:07:17.935 "namespaces": [ 00:07:17.935 { 00:07:17.935 "nsid": 1, 00:07:17.935 "bdev_name": "Null2", 00:07:17.935 "name": "Null2", 00:07:17.935 "nguid": "142637FA7DA440248E83180F4A60ACD0", 00:07:17.935 "uuid": "142637fa-7da4-4024-8e83-180f4a60acd0" 00:07:17.935 } 00:07:17.935 ] 00:07:17.935 }, 00:07:17.935 { 00:07:17.935 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:17.935 "subtype": "NVMe", 00:07:17.935 "listen_addresses": [ 00:07:17.935 { 00:07:17.935 "transport": "RDMA", 00:07:17.935 "trtype": "RDMA", 00:07:17.935 "adrfam": "IPv4", 00:07:17.935 "traddr": "192.168.100.8", 00:07:17.935 "trsvcid": "4420" 00:07:17.935 } 00:07:17.935 ], 00:07:17.935 "allow_any_host": true, 00:07:17.935 "hosts": [], 00:07:17.935 "serial_number": "SPDK00000000000003", 00:07:17.935 "model_number": "SPDK bdev Controller", 00:07:17.935 "max_namespaces": 32, 00:07:17.935 "min_cntlid": 1, 00:07:17.935 "max_cntlid": 65519, 00:07:17.935 "namespaces": [ 00:07:17.935 { 00:07:17.935 "nsid": 1, 00:07:17.935 "bdev_name": "Null3", 00:07:17.935 "name": "Null3", 00:07:17.935 "nguid": "F54189F5AF694F03A4BBFBF7103F9BBF", 00:07:17.935 "uuid": "f54189f5-af69-4f03-a4bb-fbf7103f9bbf" 00:07:17.935 } 00:07:17.935 ] 00:07:17.935 }, 00:07:17.935 { 00:07:17.935 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:17.935 "subtype": "NVMe", 00:07:17.935 "listen_addresses": [ 00:07:17.935 { 00:07:17.935 "transport": "RDMA", 00:07:17.935 "trtype": "RDMA", 00:07:17.935 "adrfam": "IPv4", 00:07:17.935 "traddr": "192.168.100.8", 00:07:17.935 "trsvcid": "4420" 00:07:17.935 } 00:07:17.935 ], 00:07:17.935 "allow_any_host": true, 00:07:17.935 "hosts": [], 00:07:17.935 "serial_number": "SPDK00000000000004", 00:07:17.935 "model_number": "SPDK bdev 
Controller", 00:07:17.935 "max_namespaces": 32, 00:07:17.935 "min_cntlid": 1, 00:07:17.935 "max_cntlid": 65519, 00:07:17.935 "namespaces": [ 00:07:17.935 { 00:07:17.935 "nsid": 1, 00:07:17.935 "bdev_name": "Null4", 00:07:17.935 "name": "Null4", 00:07:17.935 "nguid": "8655EF07356E40E1BEDC5FC69260564B", 00:07:17.935 "uuid": "8655ef07-356e-40e1-bedc-5fc69260564b" 00:07:17.935 } 00:07:17.935 ] 00:07:17.935 } 00:07:17.935 ] 00:07:17.935 17:11:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.935 17:11:27 -- target/discovery.sh@42 -- # seq 1 4 00:07:17.935 17:11:27 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:17.935 17:11:27 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:17.935 17:11:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.935 17:11:27 -- common/autotest_common.sh@10 -- # set +x 00:07:17.935 17:11:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.935 17:11:27 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:17.935 17:11:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.935 17:11:27 -- common/autotest_common.sh@10 -- # set +x 00:07:17.935 17:11:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.935 17:11:27 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:17.935 17:11:27 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:17.935 17:11:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.935 17:11:27 -- common/autotest_common.sh@10 -- # set +x 00:07:17.935 17:11:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.935 17:11:27 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:17.935 17:11:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.935 17:11:27 -- common/autotest_common.sh@10 -- # set +x 00:07:17.935 17:11:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.935 17:11:27 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:17.935 17:11:27 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:17.935 17:11:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.935 17:11:27 -- common/autotest_common.sh@10 -- # set +x 00:07:17.935 17:11:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.935 17:11:27 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:17.935 17:11:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.935 17:11:27 -- common/autotest_common.sh@10 -- # set +x 00:07:17.935 17:11:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.935 17:11:27 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:17.935 17:11:27 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:17.935 17:11:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.935 17:11:27 -- common/autotest_common.sh@10 -- # set +x 00:07:17.935 17:11:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.935 17:11:27 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:17.935 17:11:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.935 17:11:27 -- common/autotest_common.sh@10 -- # set +x 00:07:17.935 17:11:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.935 17:11:27 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:07:17.935 17:11:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.935 
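The teardown above is the mirror image of that setup: each subsystem is deleted, its null bdev removed, and the referral withdrawn, after which the bdev_get_bdevs call below is expected to come back empty. As a stand-alone sequence under the same rpc.py assumption as the setup sketch:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    for i in 1 2 3 4; do
        $rpc nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
        $rpc bdev_null_delete "Null$i"
    done
    $rpc nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430

    # nothing should be left behind
    test -z "$($rpc bdev_get_bdevs | jq -r '.[].name')"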
17:11:27 -- common/autotest_common.sh@10 -- # set +x 00:07:17.935 17:11:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.935 17:11:27 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:17.936 17:11:27 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:17.936 17:11:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.936 17:11:27 -- common/autotest_common.sh@10 -- # set +x 00:07:17.936 17:11:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:18.195 17:11:27 -- target/discovery.sh@49 -- # check_bdevs= 00:07:18.195 17:11:27 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:18.195 17:11:27 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:18.195 17:11:27 -- target/discovery.sh@57 -- # nvmftestfini 00:07:18.195 17:11:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:18.195 17:11:27 -- nvmf/common.sh@117 -- # sync 00:07:18.195 17:11:27 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:18.195 17:11:27 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:18.195 17:11:27 -- nvmf/common.sh@120 -- # set +e 00:07:18.195 17:11:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:18.195 17:11:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:18.195 rmmod nvme_rdma 00:07:18.195 rmmod nvme_fabrics 00:07:18.195 17:11:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:18.195 17:11:27 -- nvmf/common.sh@124 -- # set -e 00:07:18.195 17:11:27 -- nvmf/common.sh@125 -- # return 0 00:07:18.195 17:11:27 -- nvmf/common.sh@478 -- # '[' -n 2960368 ']' 00:07:18.195 17:11:27 -- nvmf/common.sh@479 -- # killprocess 2960368 00:07:18.195 17:11:27 -- common/autotest_common.sh@936 -- # '[' -z 2960368 ']' 00:07:18.195 17:11:27 -- common/autotest_common.sh@940 -- # kill -0 2960368 00:07:18.195 17:11:27 -- common/autotest_common.sh@941 -- # uname 00:07:18.195 17:11:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:18.195 17:11:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2960368 00:07:18.195 17:11:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:18.195 17:11:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:18.195 17:11:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2960368' 00:07:18.195 killing process with pid 2960368 00:07:18.195 17:11:27 -- common/autotest_common.sh@955 -- # kill 2960368 00:07:18.195 [2024-04-24 17:11:27.288086] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:18.195 17:11:27 -- common/autotest_common.sh@960 -- # wait 2960368 00:07:18.453 17:11:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:18.453 17:11:27 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:07:18.453 00:07:18.453 real 0m7.352s 00:07:18.453 user 0m7.911s 00:07:18.453 sys 0m4.465s 00:07:18.453 17:11:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:18.453 17:11:27 -- common/autotest_common.sh@10 -- # set +x 00:07:18.453 ************************************ 00:07:18.453 END TEST nvmf_discovery 00:07:18.453 ************************************ 00:07:18.453 17:11:27 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:07:18.453 17:11:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:18.453 17:11:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.453 17:11:27 -- 
common/autotest_common.sh@10 -- # set +x 00:07:18.713 ************************************ 00:07:18.713 START TEST nvmf_referrals 00:07:18.713 ************************************ 00:07:18.713 17:11:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:07:18.713 * Looking for test storage... 00:07:18.713 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:18.713 17:11:27 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:18.713 17:11:27 -- nvmf/common.sh@7 -- # uname -s 00:07:18.713 17:11:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:18.713 17:11:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:18.713 17:11:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:18.713 17:11:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:18.713 17:11:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:18.713 17:11:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:18.713 17:11:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:18.713 17:11:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:18.713 17:11:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:18.713 17:11:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:18.713 17:11:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:07:18.713 17:11:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:07:18.713 17:11:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.713 17:11:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:18.713 17:11:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:18.713 17:11:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:18.713 17:11:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:18.713 17:11:27 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.713 17:11:27 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.713 17:11:27 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.713 17:11:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.713 17:11:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.713 17:11:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.713 17:11:27 -- paths/export.sh@5 -- # export PATH 00:07:18.714 17:11:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.714 17:11:27 -- nvmf/common.sh@47 -- # : 0 00:07:18.714 17:11:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:18.714 17:11:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:18.714 17:11:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:18.714 17:11:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:18.714 17:11:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.714 17:11:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:18.714 17:11:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:18.714 17:11:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:18.714 17:11:27 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:18.714 17:11:27 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:18.714 17:11:27 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:18.714 17:11:27 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:18.714 17:11:27 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:18.714 17:11:27 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:18.714 17:11:27 -- target/referrals.sh@37 -- # nvmftestinit 00:07:18.714 17:11:27 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:07:18.714 17:11:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:18.714 17:11:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:18.714 17:11:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:18.714 17:11:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:18.714 17:11:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.714 17:11:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:18.714 17:11:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.714 17:11:27 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:18.714 17:11:27 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:18.714 17:11:27 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:18.714 17:11:27 -- common/autotest_common.sh@10 -- # set +x 00:07:23.988 17:11:32 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:23.988 17:11:32 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:23.988 17:11:32 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:23.988 17:11:32 -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:07:23.988 17:11:32 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:23.988 17:11:32 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:23.988 17:11:32 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:23.988 17:11:32 -- nvmf/common.sh@295 -- # net_devs=() 00:07:23.988 17:11:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:23.988 17:11:33 -- nvmf/common.sh@296 -- # e810=() 00:07:23.988 17:11:33 -- nvmf/common.sh@296 -- # local -ga e810 00:07:23.988 17:11:33 -- nvmf/common.sh@297 -- # x722=() 00:07:23.988 17:11:33 -- nvmf/common.sh@297 -- # local -ga x722 00:07:23.988 17:11:33 -- nvmf/common.sh@298 -- # mlx=() 00:07:23.988 17:11:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:23.988 17:11:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:23.988 17:11:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:23.988 17:11:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:23.988 17:11:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:23.988 17:11:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:23.988 17:11:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:23.988 17:11:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:23.988 17:11:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:23.988 17:11:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:23.988 17:11:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:23.988 17:11:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:23.988 17:11:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:23.988 17:11:33 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:23.988 17:11:33 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:23.988 17:11:33 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:23.988 17:11:33 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:23.988 17:11:33 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:23.988 17:11:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:23.988 17:11:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:23.988 17:11:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:07:23.988 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:07:23.988 17:11:33 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:23.988 17:11:33 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:23.988 17:11:33 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:23.988 17:11:33 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:23.988 17:11:33 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:23.989 17:11:33 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:23.989 17:11:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:23.989 17:11:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:07:23.989 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:07:23.989 17:11:33 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:23.989 17:11:33 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:23.989 17:11:33 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:23.989 17:11:33 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:23.989 17:11:33 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:23.989 17:11:33 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect 
-i 15' 00:07:23.989 17:11:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:23.989 17:11:33 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:23.989 17:11:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:23.989 17:11:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.989 17:11:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:23.989 17:11:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.989 17:11:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:07:23.989 Found net devices under 0000:da:00.0: mlx_0_0 00:07:23.989 17:11:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.989 17:11:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:23.989 17:11:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.989 17:11:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:23.989 17:11:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.989 17:11:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:07:23.989 Found net devices under 0000:da:00.1: mlx_0_1 00:07:23.989 17:11:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.989 17:11:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:23.989 17:11:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:23.989 17:11:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:23.989 17:11:33 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:07:23.989 17:11:33 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:07:23.989 17:11:33 -- nvmf/common.sh@409 -- # rdma_device_init 00:07:23.989 17:11:33 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:07:23.989 17:11:33 -- nvmf/common.sh@58 -- # uname 00:07:23.989 17:11:33 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:23.989 17:11:33 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:23.989 17:11:33 -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:23.989 17:11:33 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:23.989 17:11:33 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:23.989 17:11:33 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:23.989 17:11:33 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:23.989 17:11:33 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:23.989 17:11:33 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:07:23.989 17:11:33 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:23.989 17:11:33 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:23.989 17:11:33 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:23.989 17:11:33 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:23.989 17:11:33 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:23.989 17:11:33 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:23.989 17:11:33 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:23.989 17:11:33 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:23.989 17:11:33 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:23.989 17:11:33 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:23.989 17:11:33 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:23.989 17:11:33 -- nvmf/common.sh@105 -- # continue 2 00:07:23.989 17:11:33 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:23.989 17:11:33 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:23.989 17:11:33 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\0 ]] 00:07:23.989 17:11:33 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:23.989 17:11:33 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:23.989 17:11:33 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:23.989 17:11:33 -- nvmf/common.sh@105 -- # continue 2 00:07:23.989 17:11:33 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:23.989 17:11:33 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:23.989 17:11:33 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:23.989 17:11:33 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:23.989 17:11:33 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:23.989 17:11:33 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:23.989 17:11:33 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:23.989 17:11:33 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:23.989 17:11:33 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:23.989 430: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:23.989 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:07:23.989 altname enp218s0f0np0 00:07:23.989 altname ens818f0np0 00:07:23.989 inet 192.168.100.8/24 scope global mlx_0_0 00:07:23.989 valid_lft forever preferred_lft forever 00:07:23.989 17:11:33 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:23.989 17:11:33 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:23.989 17:11:33 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:23.989 17:11:33 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:23.989 17:11:33 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:23.989 17:11:33 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:23.989 17:11:33 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:23.989 17:11:33 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:23.989 17:11:33 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:23.989 431: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:23.989 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:07:23.989 altname enp218s0f1np1 00:07:23.989 altname ens818f1np1 00:07:23.989 inet 192.168.100.9/24 scope global mlx_0_1 00:07:23.989 valid_lft forever preferred_lft forever 00:07:23.989 17:11:33 -- nvmf/common.sh@411 -- # return 0 00:07:23.989 17:11:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:23.989 17:11:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:23.989 17:11:33 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:07:23.989 17:11:33 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:07:23.989 17:11:33 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:23.989 17:11:33 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:23.989 17:11:33 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:23.989 17:11:33 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:23.989 17:11:33 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:23.989 17:11:33 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:23.989 17:11:33 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:23.989 17:11:33 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:23.989 17:11:33 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:23.989 17:11:33 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:23.989 17:11:33 -- nvmf/common.sh@105 -- # continue 2 00:07:23.989 17:11:33 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:23.989 17:11:33 -- nvmf/common.sh@102 -- # for rxe_net_dev 
in "${rxe_net_devs[@]}" 00:07:23.989 17:11:33 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:23.989 17:11:33 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:23.989 17:11:33 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:23.989 17:11:33 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:23.989 17:11:33 -- nvmf/common.sh@105 -- # continue 2 00:07:23.989 17:11:33 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:23.989 17:11:33 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:23.989 17:11:33 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:23.989 17:11:33 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:23.989 17:11:33 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:23.989 17:11:33 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:23.989 17:11:33 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:23.989 17:11:33 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:23.989 17:11:33 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:23.989 17:11:33 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:23.989 17:11:33 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:23.989 17:11:33 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:23.989 17:11:33 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:07:23.989 192.168.100.9' 00:07:23.989 17:11:33 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:07:23.989 192.168.100.9' 00:07:23.989 17:11:33 -- nvmf/common.sh@446 -- # head -n 1 00:07:23.989 17:11:33 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:23.989 17:11:33 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:07:23.989 192.168.100.9' 00:07:23.989 17:11:33 -- nvmf/common.sh@447 -- # tail -n +2 00:07:23.989 17:11:33 -- nvmf/common.sh@447 -- # head -n 1 00:07:23.989 17:11:33 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:23.989 17:11:33 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:07:23.989 17:11:33 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:23.989 17:11:33 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:07:23.989 17:11:33 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:07:23.989 17:11:33 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:07:23.989 17:11:33 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:23.989 17:11:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:23.989 17:11:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:23.989 17:11:33 -- common/autotest_common.sh@10 -- # set +x 00:07:23.989 17:11:33 -- nvmf/common.sh@470 -- # nvmfpid=2962643 00:07:23.989 17:11:33 -- nvmf/common.sh@471 -- # waitforlisten 2962643 00:07:23.989 17:11:33 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:23.989 17:11:33 -- common/autotest_common.sh@817 -- # '[' -z 2962643 ']' 00:07:23.989 17:11:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.989 17:11:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:23.989 17:11:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:23.989 17:11:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:23.989 17:11:33 -- common/autotest_common.sh@10 -- # set +x 00:07:24.249 [2024-04-24 17:11:33.264871] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:07:24.249 [2024-04-24 17:11:33.264922] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:24.249 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.249 [2024-04-24 17:11:33.321338] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:24.249 [2024-04-24 17:11:33.395875] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:24.249 [2024-04-24 17:11:33.395918] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:24.249 [2024-04-24 17:11:33.395924] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:24.249 [2024-04-24 17:11:33.395930] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:24.249 [2024-04-24 17:11:33.395935] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:24.249 [2024-04-24 17:11:33.396033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.249 [2024-04-24 17:11:33.396149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.249 [2024-04-24 17:11:33.396218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:24.249 [2024-04-24 17:11:33.396219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.816 17:11:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:24.816 17:11:34 -- common/autotest_common.sh@850 -- # return 0 00:07:24.816 17:11:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:24.816 17:11:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:25.075 17:11:34 -- common/autotest_common.sh@10 -- # set +x 00:07:25.075 17:11:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.075 17:11:34 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:25.075 17:11:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.075 17:11:34 -- common/autotest_common.sh@10 -- # set +x 00:07:25.075 [2024-04-24 17:11:34.126534] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xcdaf60/0xcdf450) succeed. 00:07:25.075 [2024-04-24 17:11:34.136907] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xcdc550/0xd20ae0) succeed. 
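With the referrals-test target up (both mlx5 IB devices registered above), referrals.sh configures it over the RPC socket. rpc_cmd in this harness effectively drives scripts/rpc.py, so the setup that follows (the referrals.sh@40 through @48 markers) can be sketched as plain RPC calls; addresses and ports are the ones used in this run:

# Sketch of the configuration issued next in the trace.
cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
./scripts/rpc.py nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery
./scripts/rpc.py nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430
./scripts/rpc.py nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430
./scripts/rpc.py nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430
./scripts/rpc.py nvmf_discovery_get_referrals | jq length    # the test expects 3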
00:07:25.075 17:11:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.075 17:11:34 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:07:25.075 17:11:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.075 17:11:34 -- common/autotest_common.sh@10 -- # set +x 00:07:25.075 [2024-04-24 17:11:34.257729] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:07:25.075 17:11:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.075 17:11:34 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:07:25.075 17:11:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.075 17:11:34 -- common/autotest_common.sh@10 -- # set +x 00:07:25.075 17:11:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.075 17:11:34 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:07:25.075 17:11:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.075 17:11:34 -- common/autotest_common.sh@10 -- # set +x 00:07:25.075 17:11:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.075 17:11:34 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:07:25.075 17:11:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.075 17:11:34 -- common/autotest_common.sh@10 -- # set +x 00:07:25.075 17:11:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.075 17:11:34 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:25.075 17:11:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.075 17:11:34 -- common/autotest_common.sh@10 -- # set +x 00:07:25.075 17:11:34 -- target/referrals.sh@48 -- # jq length 00:07:25.075 17:11:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.334 17:11:34 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:25.334 17:11:34 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:25.334 17:11:34 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:25.334 17:11:34 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:25.334 17:11:34 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:25.334 17:11:34 -- target/referrals.sh@21 -- # sort 00:07:25.334 17:11:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.334 17:11:34 -- common/autotest_common.sh@10 -- # set +x 00:07:25.334 17:11:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.334 17:11:34 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:25.334 17:11:34 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:25.334 17:11:34 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:25.334 17:11:34 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:25.334 17:11:34 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:25.334 17:11:34 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:25.334 17:11:34 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:25.334 17:11:34 -- target/referrals.sh@26 -- # sort 00:07:25.334 17:11:34 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 
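The RPC-side listing above returned 127.0.0.2 127.0.0.3 127.0.0.4; the trace then repeats the check from the host side by pulling the discovery log with nvme-cli and filtering out the current discovery subsystem. Sketched here with the hostnqn/hostid generated for this run:

# Sketch of get_referral_ips nvme (the referrals.sh@22/@26 markers): every record
# that is not the current discovery subsystem is a referral, and its traddr is
# compared against the addresses registered over RPC.
nvme discover -t rdma -a 192.168.100.8 -s 8009 -o json \
     --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
     --hostid=803833e2-2ada-e911-906e-0017a4403562 \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
  | sort
# expected here: 127.0.0.2 127.0.0.3 127.0.0.4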
00:07:25.334 17:11:34 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:25.334 17:11:34 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:07:25.334 17:11:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.334 17:11:34 -- common/autotest_common.sh@10 -- # set +x 00:07:25.334 17:11:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.334 17:11:34 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:07:25.334 17:11:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.334 17:11:34 -- common/autotest_common.sh@10 -- # set +x 00:07:25.334 17:11:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.334 17:11:34 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:07:25.334 17:11:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.334 17:11:34 -- common/autotest_common.sh@10 -- # set +x 00:07:25.334 17:11:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.334 17:11:34 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:25.334 17:11:34 -- target/referrals.sh@56 -- # jq length 00:07:25.334 17:11:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.334 17:11:34 -- common/autotest_common.sh@10 -- # set +x 00:07:25.334 17:11:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.334 17:11:34 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:25.334 17:11:34 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:25.334 17:11:34 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:25.334 17:11:34 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:25.334 17:11:34 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:25.334 17:11:34 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:25.334 17:11:34 -- target/referrals.sh@26 -- # sort 00:07:25.647 17:11:34 -- target/referrals.sh@26 -- # echo 00:07:25.647 17:11:34 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:25.647 17:11:34 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:07:25.647 17:11:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.647 17:11:34 -- common/autotest_common.sh@10 -- # set +x 00:07:25.647 17:11:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.647 17:11:34 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:25.647 17:11:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.647 17:11:34 -- common/autotest_common.sh@10 -- # set +x 00:07:25.647 17:11:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.647 17:11:34 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:25.647 17:11:34 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:25.647 17:11:34 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:25.647 17:11:34 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:25.647 17:11:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.647 17:11:34 -- target/referrals.sh@21 -- # sort 00:07:25.647 17:11:34 -- 
common/autotest_common.sh@10 -- # set +x 00:07:25.647 17:11:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.647 17:11:34 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:25.647 17:11:34 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:25.647 17:11:34 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:25.647 17:11:34 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:25.647 17:11:34 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:25.647 17:11:34 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:25.647 17:11:34 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:25.647 17:11:34 -- target/referrals.sh@26 -- # sort 00:07:25.647 17:11:34 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:25.647 17:11:34 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:25.647 17:11:34 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:25.648 17:11:34 -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:25.648 17:11:34 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:25.648 17:11:34 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:25.648 17:11:34 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:25.907 17:11:34 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:25.907 17:11:34 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:25.907 17:11:34 -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:25.907 17:11:34 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:25.907 17:11:34 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:25.907 17:11:34 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:25.907 17:11:34 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:25.907 17:11:34 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:25.907 17:11:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.907 17:11:34 -- common/autotest_common.sh@10 -- # set +x 00:07:25.907 17:11:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.907 17:11:34 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:25.907 17:11:34 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:25.907 17:11:34 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:25.907 17:11:34 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:25.907 17:11:34 -- target/referrals.sh@21 -- # sort 00:07:25.907 17:11:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.907 17:11:34 -- common/autotest_common.sh@10 -- 
# set +x 00:07:25.907 17:11:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.907 17:11:35 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:25.907 17:11:35 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:25.907 17:11:35 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:25.907 17:11:35 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:25.907 17:11:35 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:25.907 17:11:35 -- target/referrals.sh@26 -- # sort 00:07:25.907 17:11:35 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:25.907 17:11:35 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:25.907 17:11:35 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:25.907 17:11:35 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:25.907 17:11:35 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:25.907 17:11:35 -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:25.907 17:11:35 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:25.907 17:11:35 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:25.907 17:11:35 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:26.165 17:11:35 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:26.165 17:11:35 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:26.165 17:11:35 -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:26.165 17:11:35 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:26.165 17:11:35 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:26.165 17:11:35 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:26.165 17:11:35 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:26.165 17:11:35 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:26.165 17:11:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:26.165 17:11:35 -- common/autotest_common.sh@10 -- # set +x 00:07:26.165 17:11:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:26.165 17:11:35 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:26.165 17:11:35 -- target/referrals.sh@82 -- # jq length 00:07:26.165 17:11:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:26.165 17:11:35 -- common/autotest_common.sh@10 -- # set +x 00:07:26.165 17:11:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:26.165 17:11:35 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:26.165 17:11:35 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:26.165 17:11:35 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:26.165 17:11:35 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:26.165 17:11:35 -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:26.165 17:11:35 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:26.165 17:11:35 -- target/referrals.sh@26 -- # sort 00:07:26.424 17:11:35 -- target/referrals.sh@26 -- # echo 00:07:26.424 17:11:35 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:26.424 17:11:35 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:26.424 17:11:35 -- target/referrals.sh@86 -- # nvmftestfini 00:07:26.424 17:11:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:26.424 17:11:35 -- nvmf/common.sh@117 -- # sync 00:07:26.424 17:11:35 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:26.424 17:11:35 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:26.424 17:11:35 -- nvmf/common.sh@120 -- # set +e 00:07:26.424 17:11:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:26.424 17:11:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:26.424 rmmod nvme_rdma 00:07:26.424 rmmod nvme_fabrics 00:07:26.424 17:11:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:26.424 17:11:35 -- nvmf/common.sh@124 -- # set -e 00:07:26.424 17:11:35 -- nvmf/common.sh@125 -- # return 0 00:07:26.424 17:11:35 -- nvmf/common.sh@478 -- # '[' -n 2962643 ']' 00:07:26.424 17:11:35 -- nvmf/common.sh@479 -- # killprocess 2962643 00:07:26.424 17:11:35 -- common/autotest_common.sh@936 -- # '[' -z 2962643 ']' 00:07:26.424 17:11:35 -- common/autotest_common.sh@940 -- # kill -0 2962643 00:07:26.424 17:11:35 -- common/autotest_common.sh@941 -- # uname 00:07:26.424 17:11:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:26.424 17:11:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2962643 00:07:26.424 17:11:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:26.424 17:11:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:26.424 17:11:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2962643' 00:07:26.424 killing process with pid 2962643 00:07:26.424 17:11:35 -- common/autotest_common.sh@955 -- # kill 2962643 00:07:26.424 17:11:35 -- common/autotest_common.sh@960 -- # wait 2962643 00:07:26.684 17:11:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:26.684 17:11:35 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:07:26.684 00:07:26.684 real 0m8.086s 00:07:26.684 user 0m11.717s 00:07:26.684 sys 0m4.784s 00:07:26.684 17:11:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:26.684 17:11:35 -- common/autotest_common.sh@10 -- # set +x 00:07:26.684 ************************************ 00:07:26.684 END TEST nvmf_referrals 00:07:26.684 ************************************ 00:07:26.684 17:11:35 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:07:26.684 17:11:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:26.684 17:11:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:26.684 17:11:35 -- common/autotest_common.sh@10 -- # set +x 00:07:26.943 ************************************ 00:07:26.943 START TEST nvmf_connect_disconnect 00:07:26.943 ************************************ 00:07:26.943 17:11:35 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:07:26.943 * Looking for test storage... 00:07:26.943 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:26.943 17:11:36 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:26.943 17:11:36 -- nvmf/common.sh@7 -- # uname -s 00:07:26.943 17:11:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.943 17:11:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.943 17:11:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.943 17:11:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.943 17:11:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.943 17:11:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.943 17:11:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.943 17:11:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.943 17:11:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.943 17:11:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.943 17:11:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:07:26.943 17:11:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:07:26.943 17:11:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.943 17:11:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.943 17:11:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:26.943 17:11:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.943 17:11:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:26.943 17:11:36 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.943 17:11:36 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.943 17:11:36 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.943 17:11:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.943 17:11:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.943 17:11:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.943 17:11:36 -- paths/export.sh@5 -- # export PATH 00:07:26.943 17:11:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.943 17:11:36 -- nvmf/common.sh@47 -- # : 0 00:07:26.943 17:11:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:26.943 17:11:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:26.943 17:11:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.943 17:11:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.943 17:11:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.943 17:11:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:26.943 17:11:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:26.943 17:11:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:26.943 17:11:36 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:26.943 17:11:36 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:26.943 17:11:36 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:26.943 17:11:36 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:07:26.943 17:11:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:26.943 17:11:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:26.943 17:11:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:26.943 17:11:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:26.943 17:11:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.943 17:11:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:26.943 17:11:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.943 17:11:36 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:26.943 17:11:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:26.943 17:11:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:26.943 17:11:36 -- common/autotest_common.sh@10 -- # set +x 00:07:32.220 17:11:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:32.220 17:11:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:32.220 17:11:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:32.220 17:11:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:32.220 17:11:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:32.220 17:11:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:32.220 17:11:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:32.220 17:11:41 -- nvmf/common.sh@295 -- # net_devs=() 00:07:32.220 17:11:41 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:07:32.220 17:11:41 -- nvmf/common.sh@296 -- # e810=() 00:07:32.220 17:11:41 -- nvmf/common.sh@296 -- # local -ga e810 00:07:32.220 17:11:41 -- nvmf/common.sh@297 -- # x722=() 00:07:32.220 17:11:41 -- nvmf/common.sh@297 -- # local -ga x722 00:07:32.220 17:11:41 -- nvmf/common.sh@298 -- # mlx=() 00:07:32.220 17:11:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:32.220 17:11:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:32.220 17:11:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:32.220 17:11:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:32.220 17:11:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:32.220 17:11:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:32.220 17:11:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:32.220 17:11:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:32.220 17:11:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:32.220 17:11:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:32.220 17:11:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:32.220 17:11:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:32.220 17:11:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:32.220 17:11:41 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:32.220 17:11:41 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:32.220 17:11:41 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:32.220 17:11:41 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:32.220 17:11:41 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:32.220 17:11:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:32.220 17:11:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:32.220 17:11:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:07:32.220 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:07:32.220 17:11:41 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:32.220 17:11:41 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:32.220 17:11:41 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:32.220 17:11:41 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:32.220 17:11:41 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:32.220 17:11:41 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:32.220 17:11:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:32.220 17:11:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:07:32.220 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:07:32.220 17:11:41 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:32.220 17:11:41 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:32.220 17:11:41 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:32.220 17:11:41 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:32.220 17:11:41 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:32.220 17:11:41 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:32.220 17:11:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:32.220 17:11:41 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:32.220 17:11:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:32.220 17:11:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.220 17:11:41 
-- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:32.220 17:11:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.220 17:11:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:07:32.220 Found net devices under 0000:da:00.0: mlx_0_0 00:07:32.220 17:11:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.220 17:11:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:32.220 17:11:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.220 17:11:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:32.220 17:11:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.220 17:11:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:07:32.220 Found net devices under 0000:da:00.1: mlx_0_1 00:07:32.220 17:11:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.220 17:11:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:32.221 17:11:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:32.221 17:11:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:32.221 17:11:41 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:07:32.221 17:11:41 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:07:32.221 17:11:41 -- nvmf/common.sh@409 -- # rdma_device_init 00:07:32.221 17:11:41 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:07:32.221 17:11:41 -- nvmf/common.sh@58 -- # uname 00:07:32.221 17:11:41 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:32.221 17:11:41 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:32.221 17:11:41 -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:32.221 17:11:41 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:32.221 17:11:41 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:32.221 17:11:41 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:32.221 17:11:41 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:32.221 17:11:41 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:32.221 17:11:41 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:07:32.221 17:11:41 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:32.221 17:11:41 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:32.221 17:11:41 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:32.221 17:11:41 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:32.221 17:11:41 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:32.221 17:11:41 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:32.221 17:11:41 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:32.221 17:11:41 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:32.221 17:11:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:32.221 17:11:41 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:32.221 17:11:41 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:32.221 17:11:41 -- nvmf/common.sh@105 -- # continue 2 00:07:32.221 17:11:41 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:32.221 17:11:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:32.221 17:11:41 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:32.221 17:11:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:32.221 17:11:41 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:32.221 17:11:41 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:32.221 17:11:41 -- nvmf/common.sh@105 -- # continue 2 00:07:32.221 17:11:41 -- 
nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:32.221 17:11:41 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:32.221 17:11:41 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:32.221 17:11:41 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:32.221 17:11:41 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:32.221 17:11:41 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:32.221 17:11:41 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:32.221 17:11:41 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:32.221 17:11:41 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:32.221 430: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:32.221 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:07:32.221 altname enp218s0f0np0 00:07:32.221 altname ens818f0np0 00:07:32.221 inet 192.168.100.8/24 scope global mlx_0_0 00:07:32.221 valid_lft forever preferred_lft forever 00:07:32.221 17:11:41 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:32.221 17:11:41 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:32.221 17:11:41 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:32.221 17:11:41 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:32.221 17:11:41 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:32.221 17:11:41 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:32.221 17:11:41 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:32.221 17:11:41 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:32.221 17:11:41 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:32.221 431: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:32.221 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:07:32.221 altname enp218s0f1np1 00:07:32.221 altname ens818f1np1 00:07:32.221 inet 192.168.100.9/24 scope global mlx_0_1 00:07:32.221 valid_lft forever preferred_lft forever 00:07:32.221 17:11:41 -- nvmf/common.sh@411 -- # return 0 00:07:32.221 17:11:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:32.221 17:11:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:32.221 17:11:41 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:07:32.221 17:11:41 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:07:32.221 17:11:41 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:32.221 17:11:41 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:32.221 17:11:41 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:32.221 17:11:41 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:32.221 17:11:41 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:32.221 17:11:41 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:32.221 17:11:41 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:32.221 17:11:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:32.221 17:11:41 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:32.221 17:11:41 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:32.221 17:11:41 -- nvmf/common.sh@105 -- # continue 2 00:07:32.221 17:11:41 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:32.221 17:11:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:32.221 17:11:41 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:32.221 17:11:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:32.221 17:11:41 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:32.221 17:11:41 -- nvmf/common.sh@104 -- # echo 
mlx_0_1 00:07:32.221 17:11:41 -- nvmf/common.sh@105 -- # continue 2 00:07:32.221 17:11:41 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:32.221 17:11:41 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:32.221 17:11:41 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:32.221 17:11:41 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:32.221 17:11:41 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:32.221 17:11:41 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:32.221 17:11:41 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:32.221 17:11:41 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:32.221 17:11:41 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:32.221 17:11:41 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:32.221 17:11:41 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:32.221 17:11:41 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:32.221 17:11:41 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:07:32.221 192.168.100.9' 00:07:32.221 17:11:41 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:07:32.221 192.168.100.9' 00:07:32.221 17:11:41 -- nvmf/common.sh@446 -- # head -n 1 00:07:32.221 17:11:41 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:32.221 17:11:41 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:07:32.221 192.168.100.9' 00:07:32.221 17:11:41 -- nvmf/common.sh@447 -- # tail -n +2 00:07:32.221 17:11:41 -- nvmf/common.sh@447 -- # head -n 1 00:07:32.221 17:11:41 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:32.221 17:11:41 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:07:32.221 17:11:41 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:32.221 17:11:41 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:07:32.221 17:11:41 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:07:32.221 17:11:41 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:07:32.221 17:11:41 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:32.221 17:11:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:32.221 17:11:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:32.221 17:11:41 -- common/autotest_common.sh@10 -- # set +x 00:07:32.221 17:11:41 -- nvmf/common.sh@470 -- # nvmfpid=2964998 00:07:32.221 17:11:41 -- nvmf/common.sh@471 -- # waitforlisten 2964998 00:07:32.221 17:11:41 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:32.221 17:11:41 -- common/autotest_common.sh@817 -- # '[' -z 2964998 ']' 00:07:32.221 17:11:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.221 17:11:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:32.221 17:11:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.221 17:11:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:32.221 17:11:41 -- common/autotest_common.sh@10 -- # set +x 00:07:32.221 [2024-04-24 17:11:41.377864] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
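The connect/disconnect test then starts its own target the same way the earlier tests did: nvmfappstart launches nvmf_tgt with the full trace mask on four cores and blocks until the RPC socket answers. A rough sketch of that start-and-wait step; the polling loop is just one way to wait and stands in for the harness's waitforlisten helper:

# Sketch of nvmfappstart -m 0xF as printed above (pid 2964998 in this run).
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# stand-in for waitforlisten: retry an RPC until /var/tmp/spdk.sock accepts it
until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null; do
    sleep 0.5
done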
00:07:32.221 [2024-04-24 17:11:41.377913] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:32.221 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.221 [2024-04-24 17:11:41.436193] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:32.481 [2024-04-24 17:11:41.511782] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:32.481 [2024-04-24 17:11:41.511823] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:32.481 [2024-04-24 17:11:41.511836] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:32.481 [2024-04-24 17:11:41.511842] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:32.481 [2024-04-24 17:11:41.511862] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:32.481 [2024-04-24 17:11:41.511915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.481 [2024-04-24 17:11:41.512012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.481 [2024-04-24 17:11:41.512081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:32.481 [2024-04-24 17:11:41.512082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.049 17:11:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:33.049 17:11:42 -- common/autotest_common.sh@850 -- # return 0 00:07:33.049 17:11:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:33.049 17:11:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:33.049 17:11:42 -- common/autotest_common.sh@10 -- # set +x 00:07:33.049 17:11:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:33.049 17:11:42 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:07:33.049 17:11:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:33.049 17:11:42 -- common/autotest_common.sh@10 -- # set +x 00:07:33.049 [2024-04-24 17:11:42.216578] rdma.c:2778:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:07:33.049 [2024-04-24 17:11:42.237336] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17e2f60/0x17e7450) succeed. 00:07:33.049 [2024-04-24 17:11:42.247492] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17e4550/0x1828ae0) succeed. 
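The connect/disconnect target is then assembled with a short RPC sequence, traced in full just below; condensed, and assuming rpc_cmd is the autotest wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, it amounts to:

rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
rpc_cmd bdev_malloc_create 64 512                                   # returns bdev name Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420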
00:07:33.308 17:11:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:33.308 17:11:42 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:33.308 17:11:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:33.308 17:11:42 -- common/autotest_common.sh@10 -- # set +x 00:07:33.308 17:11:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:33.308 17:11:42 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:33.308 17:11:42 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:33.308 17:11:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:33.308 17:11:42 -- common/autotest_common.sh@10 -- # set +x 00:07:33.308 17:11:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:33.308 17:11:42 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:33.308 17:11:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:33.308 17:11:42 -- common/autotest_common.sh@10 -- # set +x 00:07:33.308 17:11:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:33.309 17:11:42 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:33.309 17:11:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:33.309 17:11:42 -- common/autotest_common.sh@10 -- # set +x 00:07:33.309 [2024-04-24 17:11:42.387239] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:33.309 17:11:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:33.309 17:11:42 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:33.309 17:11:42 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:33.309 17:11:42 -- target/connect_disconnect.sh@34 -- # set +x 00:07:37.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:41.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:45.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:49.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:53.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:53.356 17:12:02 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:53.356 17:12:02 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:53.356 17:12:02 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:53.356 17:12:02 -- nvmf/common.sh@117 -- # sync 00:07:53.356 17:12:02 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:53.356 17:12:02 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:53.356 17:12:02 -- nvmf/common.sh@120 -- # set +e 00:07:53.356 17:12:02 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:53.356 17:12:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:53.356 rmmod nvme_rdma 00:07:53.356 rmmod nvme_fabrics 00:07:53.356 17:12:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:53.356 17:12:02 -- nvmf/common.sh@124 -- # set -e 00:07:53.356 17:12:02 -- nvmf/common.sh@125 -- # return 0 00:07:53.356 17:12:02 -- nvmf/common.sh@478 -- # '[' -n 2964998 ']' 00:07:53.356 17:12:02 -- nvmf/common.sh@479 -- # killprocess 2964998 00:07:53.356 17:12:02 -- common/autotest_common.sh@936 -- # '[' -z 2964998 ']' 00:07:53.356 17:12:02 -- common/autotest_common.sh@940 -- # kill -0 2964998 00:07:53.356 17:12:02 -- common/autotest_common.sh@941 -- # uname 00:07:53.356 17:12:02 -- common/autotest_common.sh@941 -- # '[' 
Linux = Linux ']' 00:07:53.356 17:12:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2964998 00:07:53.356 17:12:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:53.356 17:12:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:53.356 17:12:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2964998' 00:07:53.356 killing process with pid 2964998 00:07:53.356 17:12:02 -- common/autotest_common.sh@955 -- # kill 2964998 00:07:53.356 17:12:02 -- common/autotest_common.sh@960 -- # wait 2964998 00:07:53.356 17:12:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:53.356 17:12:02 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:07:53.356 00:07:53.356 real 0m26.625s 00:07:53.356 user 1m25.161s 00:07:53.356 sys 0m4.850s 00:07:53.356 17:12:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:53.356 17:12:02 -- common/autotest_common.sh@10 -- # set +x 00:07:53.356 ************************************ 00:07:53.356 END TEST nvmf_connect_disconnect 00:07:53.356 ************************************ 00:07:53.356 17:12:02 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:07:53.356 17:12:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:53.356 17:12:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:53.356 17:12:02 -- common/autotest_common.sh@10 -- # set +x 00:07:53.619 ************************************ 00:07:53.619 START TEST nvmf_multitarget 00:07:53.619 ************************************ 00:07:53.619 17:12:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:07:53.619 * Looking for test storage... 
00:07:53.619 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:53.619 17:12:02 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:53.619 17:12:02 -- nvmf/common.sh@7 -- # uname -s 00:07:53.619 17:12:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.619 17:12:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.619 17:12:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.619 17:12:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.619 17:12:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.619 17:12:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.619 17:12:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.619 17:12:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.619 17:12:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.619 17:12:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.619 17:12:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:07:53.619 17:12:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:07:53.619 17:12:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.619 17:12:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.619 17:12:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:53.619 17:12:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.619 17:12:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:53.619 17:12:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.619 17:12:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.619 17:12:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.619 17:12:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.619 17:12:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.619 17:12:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.619 17:12:02 -- paths/export.sh@5 -- # export PATH 00:07:53.619 17:12:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.619 17:12:02 -- nvmf/common.sh@47 -- # : 0 00:07:53.619 17:12:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:53.619 17:12:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:53.619 17:12:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.619 17:12:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.619 17:12:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.619 17:12:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:53.619 17:12:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:53.619 17:12:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:53.619 17:12:02 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:53.619 17:12:02 -- target/multitarget.sh@15 -- # nvmftestinit 00:07:53.619 17:12:02 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:07:53.619 17:12:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.620 17:12:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:53.620 17:12:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:53.620 17:12:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:53.620 17:12:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.620 17:12:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.620 17:12:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.620 17:12:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:53.620 17:12:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:53.620 17:12:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:53.620 17:12:02 -- common/autotest_common.sh@10 -- # set +x 00:07:58.897 17:12:07 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:58.897 17:12:07 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:58.897 17:12:07 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:58.897 17:12:07 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:58.897 17:12:07 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:58.897 17:12:07 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:58.897 17:12:07 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:58.897 17:12:07 -- nvmf/common.sh@295 -- # net_devs=() 00:07:58.897 17:12:07 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:58.897 17:12:07 -- 
nvmf/common.sh@296 -- # e810=() 00:07:58.897 17:12:07 -- nvmf/common.sh@296 -- # local -ga e810 00:07:58.897 17:12:07 -- nvmf/common.sh@297 -- # x722=() 00:07:58.897 17:12:07 -- nvmf/common.sh@297 -- # local -ga x722 00:07:58.897 17:12:07 -- nvmf/common.sh@298 -- # mlx=() 00:07:58.897 17:12:07 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:58.897 17:12:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:58.897 17:12:07 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:58.897 17:12:07 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:58.897 17:12:07 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:58.897 17:12:07 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:58.897 17:12:07 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:58.897 17:12:07 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:58.897 17:12:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:58.897 17:12:07 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:58.897 17:12:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:58.897 17:12:07 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:58.897 17:12:07 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:58.897 17:12:07 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:58.897 17:12:07 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:58.897 17:12:07 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:58.897 17:12:07 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:58.897 17:12:07 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:58.897 17:12:07 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:58.897 17:12:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:58.897 17:12:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:07:58.897 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:07:58.897 17:12:07 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:58.897 17:12:07 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:58.897 17:12:07 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:58.897 17:12:07 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:58.897 17:12:07 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:58.897 17:12:07 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:58.897 17:12:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:58.897 17:12:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:07:58.897 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:07:58.897 17:12:07 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:58.897 17:12:07 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:58.897 17:12:07 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:58.897 17:12:07 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:58.897 17:12:07 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:58.897 17:12:07 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:58.897 17:12:07 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:58.897 17:12:07 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:58.897 17:12:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:58.897 17:12:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.897 17:12:07 -- nvmf/common.sh@384 -- # 
(( 1 == 0 )) 00:07:58.897 17:12:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.897 17:12:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:07:58.897 Found net devices under 0000:da:00.0: mlx_0_0 00:07:58.897 17:12:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.897 17:12:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:58.897 17:12:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.897 17:12:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:58.897 17:12:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.897 17:12:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:07:58.897 Found net devices under 0000:da:00.1: mlx_0_1 00:07:58.897 17:12:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.897 17:12:07 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:58.897 17:12:07 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:58.897 17:12:07 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:58.898 17:12:07 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:07:58.898 17:12:07 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:07:58.898 17:12:07 -- nvmf/common.sh@409 -- # rdma_device_init 00:07:58.898 17:12:07 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:07:58.898 17:12:07 -- nvmf/common.sh@58 -- # uname 00:07:58.898 17:12:07 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:58.898 17:12:07 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:58.898 17:12:07 -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:58.898 17:12:07 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:58.898 17:12:07 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:58.898 17:12:07 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:58.898 17:12:07 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:58.898 17:12:07 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:58.898 17:12:07 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:07:58.898 17:12:07 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:58.898 17:12:07 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:58.898 17:12:07 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:58.898 17:12:07 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:58.898 17:12:07 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:58.898 17:12:07 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:58.898 17:12:07 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:58.898 17:12:07 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:58.898 17:12:07 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:58.898 17:12:07 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:58.898 17:12:07 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:58.898 17:12:07 -- nvmf/common.sh@105 -- # continue 2 00:07:58.898 17:12:07 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:58.898 17:12:07 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:58.898 17:12:07 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:58.898 17:12:07 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:58.898 17:12:07 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:58.898 17:12:07 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:58.898 17:12:07 -- nvmf/common.sh@105 -- # continue 2 00:07:58.898 17:12:07 -- nvmf/common.sh@73 -- # for nic_name 
in $(get_rdma_if_list) 00:07:58.898 17:12:07 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:58.898 17:12:07 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:58.898 17:12:07 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:58.898 17:12:07 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:58.898 17:12:07 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:58.898 17:12:07 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:58.898 17:12:07 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:58.898 17:12:07 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:58.898 430: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:58.898 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:07:58.898 altname enp218s0f0np0 00:07:58.898 altname ens818f0np0 00:07:58.898 inet 192.168.100.8/24 scope global mlx_0_0 00:07:58.898 valid_lft forever preferred_lft forever 00:07:58.898 17:12:07 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:58.898 17:12:07 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:58.898 17:12:07 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:58.898 17:12:07 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:58.898 17:12:07 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:58.898 17:12:07 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:58.898 17:12:07 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:58.898 17:12:07 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:58.898 17:12:07 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:58.898 431: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:58.898 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:07:58.898 altname enp218s0f1np1 00:07:58.898 altname ens818f1np1 00:07:58.898 inet 192.168.100.9/24 scope global mlx_0_1 00:07:58.898 valid_lft forever preferred_lft forever 00:07:58.898 17:12:07 -- nvmf/common.sh@411 -- # return 0 00:07:58.898 17:12:07 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:58.898 17:12:07 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:58.898 17:12:07 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:07:58.898 17:12:07 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:07:58.898 17:12:07 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:58.898 17:12:07 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:58.898 17:12:07 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:58.898 17:12:07 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:58.898 17:12:07 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:58.898 17:12:07 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:58.898 17:12:07 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:58.898 17:12:07 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:58.898 17:12:07 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:58.898 17:12:07 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:58.898 17:12:07 -- nvmf/common.sh@105 -- # continue 2 00:07:58.898 17:12:07 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:58.898 17:12:07 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:58.898 17:12:07 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:58.898 17:12:07 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:58.898 17:12:07 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:58.898 17:12:07 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:58.898 17:12:07 -- 
nvmf/common.sh@105 -- # continue 2 00:07:58.898 17:12:07 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:58.898 17:12:07 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:58.898 17:12:07 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:58.898 17:12:07 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:58.898 17:12:07 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:58.898 17:12:07 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:58.898 17:12:07 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:58.898 17:12:07 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:58.898 17:12:07 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:58.898 17:12:07 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:58.898 17:12:07 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:58.898 17:12:07 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:58.898 17:12:07 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:07:58.898 192.168.100.9' 00:07:58.898 17:12:07 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:07:58.898 192.168.100.9' 00:07:58.898 17:12:07 -- nvmf/common.sh@446 -- # head -n 1 00:07:58.898 17:12:07 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:58.898 17:12:07 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:07:58.898 192.168.100.9' 00:07:58.898 17:12:07 -- nvmf/common.sh@447 -- # tail -n +2 00:07:58.898 17:12:07 -- nvmf/common.sh@447 -- # head -n 1 00:07:58.898 17:12:07 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:58.898 17:12:07 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:07:58.898 17:12:07 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:58.898 17:12:07 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:07:58.898 17:12:07 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:07:58.898 17:12:07 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:07:58.898 17:12:07 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:58.898 17:12:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:58.898 17:12:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:58.899 17:12:07 -- common/autotest_common.sh@10 -- # set +x 00:07:58.899 17:12:07 -- nvmf/common.sh@470 -- # nvmfpid=2967706 00:07:58.899 17:12:07 -- nvmf/common.sh@471 -- # waitforlisten 2967706 00:07:58.899 17:12:07 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:58.899 17:12:07 -- common/autotest_common.sh@817 -- # '[' -z 2967706 ']' 00:07:58.899 17:12:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.899 17:12:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:58.899 17:12:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.899 17:12:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:58.899 17:12:07 -- common/autotest_common.sh@10 -- # set +x 00:07:58.899 [2024-04-24 17:12:07.769719] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
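As in the previous test, rdma_device_init has just reloaded the IB/RDMA kernel module stack before the target restart; a condensed equivalent of the modprobe sequence traced above (not the literal library code):

for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
  modprobe $mod                 # load_ib_rdma_modules, as traced
done
modprobe nvme-rdma              # host-side NVMe/RDMA module, once the target IPs are known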
00:07:58.899 [2024-04-24 17:12:07.769762] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:58.899 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.899 [2024-04-24 17:12:07.820978] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:58.899 [2024-04-24 17:12:07.899555] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:58.899 [2024-04-24 17:12:07.899596] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:58.899 [2024-04-24 17:12:07.899603] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:58.899 [2024-04-24 17:12:07.899609] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:58.899 [2024-04-24 17:12:07.899614] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:58.899 [2024-04-24 17:12:07.899656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.899 [2024-04-24 17:12:07.899758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:58.899 [2024-04-24 17:12:07.899855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:58.899 [2024-04-24 17:12:07.899857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.467 17:12:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:59.467 17:12:08 -- common/autotest_common.sh@850 -- # return 0 00:07:59.467 17:12:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:59.467 17:12:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:59.467 17:12:08 -- common/autotest_common.sh@10 -- # set +x 00:07:59.467 17:12:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:59.467 17:12:08 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:59.467 17:12:08 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:59.467 17:12:08 -- target/multitarget.sh@21 -- # jq length 00:07:59.725 17:12:08 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:59.726 17:12:08 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:59.726 "nvmf_tgt_1" 00:07:59.726 17:12:08 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:59.726 "nvmf_tgt_2" 00:07:59.726 17:12:08 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:59.726 17:12:08 -- target/multitarget.sh@28 -- # jq length 00:07:59.984 17:12:09 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:07:59.984 17:12:09 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:59.984 true 00:07:59.984 17:12:09 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:00.244 true 00:08:00.244 17:12:09 -- 
target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:00.244 17:12:09 -- target/multitarget.sh@35 -- # jq length 00:08:00.244 17:12:09 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:00.244 17:12:09 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:00.244 17:12:09 -- target/multitarget.sh@41 -- # nvmftestfini 00:08:00.244 17:12:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:00.244 17:12:09 -- nvmf/common.sh@117 -- # sync 00:08:00.244 17:12:09 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:00.244 17:12:09 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:00.244 17:12:09 -- nvmf/common.sh@120 -- # set +e 00:08:00.244 17:12:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:00.244 17:12:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:00.244 rmmod nvme_rdma 00:08:00.244 rmmod nvme_fabrics 00:08:00.244 17:12:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:00.244 17:12:09 -- nvmf/common.sh@124 -- # set -e 00:08:00.244 17:12:09 -- nvmf/common.sh@125 -- # return 0 00:08:00.244 17:12:09 -- nvmf/common.sh@478 -- # '[' -n 2967706 ']' 00:08:00.244 17:12:09 -- nvmf/common.sh@479 -- # killprocess 2967706 00:08:00.244 17:12:09 -- common/autotest_common.sh@936 -- # '[' -z 2967706 ']' 00:08:00.244 17:12:09 -- common/autotest_common.sh@940 -- # kill -0 2967706 00:08:00.244 17:12:09 -- common/autotest_common.sh@941 -- # uname 00:08:00.244 17:12:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:00.244 17:12:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2967706 00:08:00.244 17:12:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:00.244 17:12:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:00.244 17:12:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2967706' 00:08:00.244 killing process with pid 2967706 00:08:00.244 17:12:09 -- common/autotest_common.sh@955 -- # kill 2967706 00:08:00.244 17:12:09 -- common/autotest_common.sh@960 -- # wait 2967706 00:08:00.503 17:12:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:00.503 17:12:09 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:08:00.503 00:08:00.503 real 0m6.943s 00:08:00.503 user 0m8.952s 00:08:00.503 sys 0m4.123s 00:08:00.503 17:12:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:00.503 17:12:09 -- common/autotest_common.sh@10 -- # set +x 00:08:00.503 ************************************ 00:08:00.503 END TEST nvmf_multitarget 00:08:00.503 ************************************ 00:08:00.503 17:12:09 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:08:00.503 17:12:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:00.503 17:12:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:00.503 17:12:09 -- common/autotest_common.sh@10 -- # set +x 00:08:00.763 ************************************ 00:08:00.763 START TEST nvmf_rpc 00:08:00.763 ************************************ 00:08:00.763 17:12:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:08:00.763 * Looking for test storage... 
00:08:00.763 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:00.763 17:12:09 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:00.763 17:12:09 -- nvmf/common.sh@7 -- # uname -s 00:08:00.763 17:12:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.763 17:12:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.763 17:12:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.763 17:12:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.763 17:12:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.763 17:12:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.763 17:12:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.763 17:12:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.763 17:12:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.763 17:12:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.763 17:12:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:08:00.763 17:12:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:08:00.763 17:12:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.763 17:12:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.763 17:12:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:00.763 17:12:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.763 17:12:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:00.763 17:12:09 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.763 17:12:09 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.763 17:12:09 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.763 17:12:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.763 17:12:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.763 17:12:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.763 17:12:09 -- paths/export.sh@5 -- # export PATH 00:08:00.763 17:12:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.763 17:12:09 -- nvmf/common.sh@47 -- # : 0 00:08:00.763 17:12:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:00.763 17:12:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:00.763 17:12:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.763 17:12:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.763 17:12:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.763 17:12:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:00.763 17:12:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:00.763 17:12:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:00.763 17:12:09 -- target/rpc.sh@11 -- # loops=5 00:08:00.763 17:12:09 -- target/rpc.sh@23 -- # nvmftestinit 00:08:00.763 17:12:09 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:08:00.763 17:12:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.763 17:12:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:00.763 17:12:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:00.763 17:12:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:00.763 17:12:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.763 17:12:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:00.763 17:12:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.763 17:12:09 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:00.763 17:12:09 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:00.763 17:12:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:00.763 17:12:09 -- common/autotest_common.sh@10 -- # set +x 00:08:06.084 17:12:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:06.084 17:12:14 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:06.084 17:12:14 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:06.084 17:12:14 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:06.084 17:12:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:06.084 17:12:14 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:06.084 17:12:14 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:06.084 17:12:14 -- nvmf/common.sh@295 -- # net_devs=() 00:08:06.084 17:12:14 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:06.084 17:12:14 -- nvmf/common.sh@296 -- # e810=() 00:08:06.084 17:12:14 -- nvmf/common.sh@296 -- # local -ga e810 00:08:06.084 
17:12:14 -- nvmf/common.sh@297 -- # x722=() 00:08:06.084 17:12:14 -- nvmf/common.sh@297 -- # local -ga x722 00:08:06.084 17:12:14 -- nvmf/common.sh@298 -- # mlx=() 00:08:06.084 17:12:14 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:06.084 17:12:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:06.084 17:12:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:06.084 17:12:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:06.084 17:12:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:06.084 17:12:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:06.084 17:12:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:06.084 17:12:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:06.084 17:12:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:06.084 17:12:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:06.084 17:12:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:06.084 17:12:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:06.084 17:12:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:06.084 17:12:14 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:06.084 17:12:14 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:06.084 17:12:14 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:06.084 17:12:14 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:06.084 17:12:14 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:06.084 17:12:14 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:06.084 17:12:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:06.085 17:12:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:08:06.085 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:08:06.085 17:12:14 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:06.085 17:12:14 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:06.085 17:12:14 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:06.085 17:12:14 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:06.085 17:12:14 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:06.085 17:12:14 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:06.085 17:12:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:06.085 17:12:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:08:06.085 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:08:06.085 17:12:14 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:06.085 17:12:14 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:06.085 17:12:14 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:06.085 17:12:14 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:06.085 17:12:14 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:06.085 17:12:14 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:06.085 17:12:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:06.085 17:12:14 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:06.085 17:12:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:06.085 17:12:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.085 17:12:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:06.085 17:12:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
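The PCI scan above keeps only the Mellanox device IDs and then resolves each function to its netdev through sysfs; the "Found net devices under ..." lines just below come from that step. A minimal sketch, with the device list hard-coded here purely for illustration:

for pci in 0000:da:00.0 0000:da:00.1; do
  pci_net_devs=(/sys/bus/pci/devices/$pci/net/*)      # e.g. .../net/mlx_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the sysfs path prefix
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
  net_devs+=("${pci_net_devs[@]}")
done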
00:08:06.085 17:12:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:08:06.085 Found net devices under 0000:da:00.0: mlx_0_0 00:08:06.085 17:12:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.085 17:12:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:06.085 17:12:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.085 17:12:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:06.085 17:12:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.085 17:12:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:08:06.085 Found net devices under 0000:da:00.1: mlx_0_1 00:08:06.085 17:12:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.085 17:12:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:06.085 17:12:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:06.085 17:12:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:06.085 17:12:14 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:08:06.085 17:12:14 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:08:06.085 17:12:14 -- nvmf/common.sh@409 -- # rdma_device_init 00:08:06.085 17:12:14 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:08:06.085 17:12:14 -- nvmf/common.sh@58 -- # uname 00:08:06.085 17:12:14 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:06.085 17:12:14 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:06.085 17:12:14 -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:06.085 17:12:14 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:06.085 17:12:14 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:06.085 17:12:14 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:06.085 17:12:14 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:06.085 17:12:14 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:06.085 17:12:14 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:08:06.085 17:12:14 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:06.085 17:12:14 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:06.085 17:12:14 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:06.085 17:12:14 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:06.085 17:12:14 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:06.085 17:12:14 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:06.085 17:12:14 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:06.085 17:12:14 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:06.085 17:12:14 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:06.085 17:12:14 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:06.085 17:12:14 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:06.085 17:12:14 -- nvmf/common.sh@105 -- # continue 2 00:08:06.085 17:12:14 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:06.085 17:12:14 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:06.085 17:12:14 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:06.085 17:12:14 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:06.085 17:12:14 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:06.085 17:12:14 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:06.085 17:12:14 -- nvmf/common.sh@105 -- # continue 2 00:08:06.085 17:12:14 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:06.085 17:12:14 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 
00:08:06.085 17:12:14 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:06.085 17:12:14 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:06.085 17:12:14 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:06.085 17:12:14 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:06.085 17:12:14 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:06.085 17:12:14 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:06.085 17:12:14 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:06.085 430: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:06.085 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:08:06.085 altname enp218s0f0np0 00:08:06.085 altname ens818f0np0 00:08:06.085 inet 192.168.100.8/24 scope global mlx_0_0 00:08:06.085 valid_lft forever preferred_lft forever 00:08:06.085 17:12:14 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:06.085 17:12:14 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:06.085 17:12:14 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:06.085 17:12:14 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:06.085 17:12:14 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:06.085 17:12:14 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:06.085 17:12:14 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:06.085 17:12:14 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:06.085 17:12:14 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:06.085 431: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:06.085 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:08:06.085 altname enp218s0f1np1 00:08:06.085 altname ens818f1np1 00:08:06.085 inet 192.168.100.9/24 scope global mlx_0_1 00:08:06.085 valid_lft forever preferred_lft forever 00:08:06.085 17:12:14 -- nvmf/common.sh@411 -- # return 0 00:08:06.085 17:12:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:06.085 17:12:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:06.085 17:12:14 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:08:06.085 17:12:14 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:08:06.085 17:12:14 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:06.085 17:12:14 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:06.085 17:12:14 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:06.085 17:12:14 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:06.085 17:12:14 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:06.085 17:12:14 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:06.085 17:12:14 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:06.085 17:12:14 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:06.085 17:12:14 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:06.085 17:12:14 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:06.085 17:12:14 -- nvmf/common.sh@105 -- # continue 2 00:08:06.085 17:12:14 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:06.085 17:12:14 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:06.085 17:12:14 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:06.085 17:12:14 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:06.085 17:12:14 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:06.085 17:12:14 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:06.085 17:12:14 -- nvmf/common.sh@105 -- # continue 2 00:08:06.085 17:12:14 -- nvmf/common.sh@86 -- # for nic_name in 
$(get_rdma_if_list) 00:08:06.085 17:12:14 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:06.085 17:12:14 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:06.085 17:12:14 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:06.085 17:12:14 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:06.085 17:12:14 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:06.085 17:12:14 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:06.085 17:12:14 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:06.085 17:12:14 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:06.085 17:12:14 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:06.085 17:12:14 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:06.085 17:12:14 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:06.085 17:12:14 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:08:06.085 192.168.100.9' 00:08:06.085 17:12:14 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:06.085 192.168.100.9' 00:08:06.085 17:12:14 -- nvmf/common.sh@446 -- # head -n 1 00:08:06.085 17:12:14 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:06.085 17:12:14 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:08:06.085 192.168.100.9' 00:08:06.085 17:12:14 -- nvmf/common.sh@447 -- # head -n 1 00:08:06.085 17:12:14 -- nvmf/common.sh@447 -- # tail -n +2 00:08:06.085 17:12:14 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:06.085 17:12:14 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:08:06.085 17:12:14 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:06.085 17:12:14 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:08:06.085 17:12:14 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:08:06.085 17:12:14 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:08:06.085 17:12:14 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:06.085 17:12:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:06.085 17:12:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:06.085 17:12:14 -- common/autotest_common.sh@10 -- # set +x 00:08:06.085 17:12:14 -- nvmf/common.sh@470 -- # nvmfpid=2970362 00:08:06.085 17:12:14 -- nvmf/common.sh@471 -- # waitforlisten 2970362 00:08:06.085 17:12:14 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:06.085 17:12:14 -- common/autotest_common.sh@817 -- # '[' -z 2970362 ']' 00:08:06.085 17:12:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.085 17:12:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:06.086 17:12:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.086 17:12:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:06.086 17:12:14 -- common/autotest_common.sh@10 -- # set +x 00:08:06.086 [2024-04-24 17:12:14.566419] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
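With the target up, the first assertion in rpc.sh simply counts the poll groups reported by nvmf_get_stats (the full JSON dump follows below); with core mask 0xF that should be one group per reactor core, i.e. four. Roughly, using the same jq filter as the trace:

stats=$(rpc_cmd nvmf_get_stats)
groups=$(echo "$stats" | jq '.poll_groups[].name' | wc -l)
(( groups == 4 ))               # one nvmf poll group per core in -m 0xF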
00:08:06.086 [2024-04-24 17:12:14.566467] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.086 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.086 [2024-04-24 17:12:14.621771] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:06.086 [2024-04-24 17:12:14.693258] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.086 [2024-04-24 17:12:14.693298] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:06.086 [2024-04-24 17:12:14.693308] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.086 [2024-04-24 17:12:14.693313] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.086 [2024-04-24 17:12:14.693318] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:06.086 [2024-04-24 17:12:14.693383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.086 [2024-04-24 17:12:14.693501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.086 [2024-04-24 17:12:14.693572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:06.086 [2024-04-24 17:12:14.693573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.345 17:12:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:06.345 17:12:15 -- common/autotest_common.sh@850 -- # return 0 00:08:06.345 17:12:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:06.345 17:12:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:06.345 17:12:15 -- common/autotest_common.sh@10 -- # set +x 00:08:06.345 17:12:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.345 17:12:15 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:06.345 17:12:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:06.345 17:12:15 -- common/autotest_common.sh@10 -- # set +x 00:08:06.345 17:12:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:06.345 17:12:15 -- target/rpc.sh@26 -- # stats='{ 00:08:06.345 "tick_rate": 2100000000, 00:08:06.345 "poll_groups": [ 00:08:06.345 { 00:08:06.345 "name": "nvmf_tgt_poll_group_0", 00:08:06.345 "admin_qpairs": 0, 00:08:06.345 "io_qpairs": 0, 00:08:06.345 "current_admin_qpairs": 0, 00:08:06.345 "current_io_qpairs": 0, 00:08:06.345 "pending_bdev_io": 0, 00:08:06.345 "completed_nvme_io": 0, 00:08:06.345 "transports": [] 00:08:06.345 }, 00:08:06.345 { 00:08:06.345 "name": "nvmf_tgt_poll_group_1", 00:08:06.345 "admin_qpairs": 0, 00:08:06.345 "io_qpairs": 0, 00:08:06.345 "current_admin_qpairs": 0, 00:08:06.345 "current_io_qpairs": 0, 00:08:06.345 "pending_bdev_io": 0, 00:08:06.345 "completed_nvme_io": 0, 00:08:06.345 "transports": [] 00:08:06.345 }, 00:08:06.345 { 00:08:06.345 "name": "nvmf_tgt_poll_group_2", 00:08:06.345 "admin_qpairs": 0, 00:08:06.345 "io_qpairs": 0, 00:08:06.345 "current_admin_qpairs": 0, 00:08:06.345 "current_io_qpairs": 0, 00:08:06.345 "pending_bdev_io": 0, 00:08:06.345 "completed_nvme_io": 0, 00:08:06.345 "transports": [] 00:08:06.345 }, 00:08:06.345 { 00:08:06.345 "name": "nvmf_tgt_poll_group_3", 00:08:06.345 "admin_qpairs": 0, 00:08:06.345 "io_qpairs": 0, 00:08:06.345 "current_admin_qpairs": 0, 00:08:06.345 
"current_io_qpairs": 0, 00:08:06.345 "pending_bdev_io": 0, 00:08:06.345 "completed_nvme_io": 0, 00:08:06.345 "transports": [] 00:08:06.345 } 00:08:06.345 ] 00:08:06.345 }' 00:08:06.345 17:12:15 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:06.345 17:12:15 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:06.345 17:12:15 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:06.345 17:12:15 -- target/rpc.sh@15 -- # wc -l 00:08:06.345 17:12:15 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:06.345 17:12:15 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:06.345 17:12:15 -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:06.345 17:12:15 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:06.345 17:12:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:06.345 17:12:15 -- common/autotest_common.sh@10 -- # set +x 00:08:06.345 [2024-04-24 17:12:15.534054] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x14aefc0/0x14b34b0) succeed. 00:08:06.345 [2024-04-24 17:12:15.544302] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x14b05b0/0x14f4b40) succeed. 00:08:06.605 17:12:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:06.605 17:12:15 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:06.605 17:12:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:06.605 17:12:15 -- common/autotest_common.sh@10 -- # set +x 00:08:06.605 17:12:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:06.605 17:12:15 -- target/rpc.sh@33 -- # stats='{ 00:08:06.605 "tick_rate": 2100000000, 00:08:06.605 "poll_groups": [ 00:08:06.605 { 00:08:06.605 "name": "nvmf_tgt_poll_group_0", 00:08:06.605 "admin_qpairs": 0, 00:08:06.605 "io_qpairs": 0, 00:08:06.605 "current_admin_qpairs": 0, 00:08:06.605 "current_io_qpairs": 0, 00:08:06.605 "pending_bdev_io": 0, 00:08:06.605 "completed_nvme_io": 0, 00:08:06.605 "transports": [ 00:08:06.605 { 00:08:06.605 "trtype": "RDMA", 00:08:06.605 "pending_data_buffer": 0, 00:08:06.605 "devices": [ 00:08:06.605 { 00:08:06.605 "name": "mlx5_0", 00:08:06.605 "polls": 15489, 00:08:06.605 "idle_polls": 15489, 00:08:06.605 "completions": 0, 00:08:06.605 "requests": 0, 00:08:06.605 "request_latency": 0, 00:08:06.605 "pending_free_request": 0, 00:08:06.605 "pending_rdma_read": 0, 00:08:06.605 "pending_rdma_write": 0, 00:08:06.605 "pending_rdma_send": 0, 00:08:06.605 "total_send_wrs": 0, 00:08:06.605 "send_doorbell_updates": 0, 00:08:06.605 "total_recv_wrs": 4096, 00:08:06.605 "recv_doorbell_updates": 1 00:08:06.605 }, 00:08:06.605 { 00:08:06.605 "name": "mlx5_1", 00:08:06.605 "polls": 15489, 00:08:06.605 "idle_polls": 15489, 00:08:06.605 "completions": 0, 00:08:06.605 "requests": 0, 00:08:06.605 "request_latency": 0, 00:08:06.605 "pending_free_request": 0, 00:08:06.605 "pending_rdma_read": 0, 00:08:06.605 "pending_rdma_write": 0, 00:08:06.605 "pending_rdma_send": 0, 00:08:06.605 "total_send_wrs": 0, 00:08:06.605 "send_doorbell_updates": 0, 00:08:06.605 "total_recv_wrs": 4096, 00:08:06.605 "recv_doorbell_updates": 1 00:08:06.605 } 00:08:06.605 ] 00:08:06.605 } 00:08:06.605 ] 00:08:06.605 }, 00:08:06.605 { 00:08:06.605 "name": "nvmf_tgt_poll_group_1", 00:08:06.605 "admin_qpairs": 0, 00:08:06.605 "io_qpairs": 0, 00:08:06.605 "current_admin_qpairs": 0, 00:08:06.605 "current_io_qpairs": 0, 00:08:06.605 "pending_bdev_io": 0, 00:08:06.605 "completed_nvme_io": 0, 00:08:06.605 "transports": [ 00:08:06.605 { 00:08:06.605 "trtype": "RDMA", 00:08:06.605 
"pending_data_buffer": 0, 00:08:06.605 "devices": [ 00:08:06.605 { 00:08:06.605 "name": "mlx5_0", 00:08:06.605 "polls": 10287, 00:08:06.605 "idle_polls": 10287, 00:08:06.605 "completions": 0, 00:08:06.605 "requests": 0, 00:08:06.605 "request_latency": 0, 00:08:06.605 "pending_free_request": 0, 00:08:06.605 "pending_rdma_read": 0, 00:08:06.605 "pending_rdma_write": 0, 00:08:06.605 "pending_rdma_send": 0, 00:08:06.605 "total_send_wrs": 0, 00:08:06.605 "send_doorbell_updates": 0, 00:08:06.605 "total_recv_wrs": 4096, 00:08:06.605 "recv_doorbell_updates": 1 00:08:06.605 }, 00:08:06.605 { 00:08:06.605 "name": "mlx5_1", 00:08:06.605 "polls": 10287, 00:08:06.605 "idle_polls": 10287, 00:08:06.605 "completions": 0, 00:08:06.605 "requests": 0, 00:08:06.605 "request_latency": 0, 00:08:06.605 "pending_free_request": 0, 00:08:06.605 "pending_rdma_read": 0, 00:08:06.605 "pending_rdma_write": 0, 00:08:06.605 "pending_rdma_send": 0, 00:08:06.605 "total_send_wrs": 0, 00:08:06.605 "send_doorbell_updates": 0, 00:08:06.605 "total_recv_wrs": 4096, 00:08:06.605 "recv_doorbell_updates": 1 00:08:06.605 } 00:08:06.605 ] 00:08:06.605 } 00:08:06.605 ] 00:08:06.605 }, 00:08:06.605 { 00:08:06.605 "name": "nvmf_tgt_poll_group_2", 00:08:06.605 "admin_qpairs": 0, 00:08:06.605 "io_qpairs": 0, 00:08:06.605 "current_admin_qpairs": 0, 00:08:06.605 "current_io_qpairs": 0, 00:08:06.605 "pending_bdev_io": 0, 00:08:06.605 "completed_nvme_io": 0, 00:08:06.605 "transports": [ 00:08:06.605 { 00:08:06.605 "trtype": "RDMA", 00:08:06.605 "pending_data_buffer": 0, 00:08:06.605 "devices": [ 00:08:06.605 { 00:08:06.605 "name": "mlx5_0", 00:08:06.605 "polls": 5586, 00:08:06.605 "idle_polls": 5586, 00:08:06.605 "completions": 0, 00:08:06.605 "requests": 0, 00:08:06.605 "request_latency": 0, 00:08:06.605 "pending_free_request": 0, 00:08:06.605 "pending_rdma_read": 0, 00:08:06.605 "pending_rdma_write": 0, 00:08:06.605 "pending_rdma_send": 0, 00:08:06.605 "total_send_wrs": 0, 00:08:06.605 "send_doorbell_updates": 0, 00:08:06.605 "total_recv_wrs": 4096, 00:08:06.605 "recv_doorbell_updates": 1 00:08:06.605 }, 00:08:06.605 { 00:08:06.605 "name": "mlx5_1", 00:08:06.605 "polls": 5586, 00:08:06.605 "idle_polls": 5586, 00:08:06.605 "completions": 0, 00:08:06.605 "requests": 0, 00:08:06.605 "request_latency": 0, 00:08:06.605 "pending_free_request": 0, 00:08:06.605 "pending_rdma_read": 0, 00:08:06.605 "pending_rdma_write": 0, 00:08:06.605 "pending_rdma_send": 0, 00:08:06.605 "total_send_wrs": 0, 00:08:06.605 "send_doorbell_updates": 0, 00:08:06.605 "total_recv_wrs": 4096, 00:08:06.605 "recv_doorbell_updates": 1 00:08:06.605 } 00:08:06.605 ] 00:08:06.605 } 00:08:06.605 ] 00:08:06.605 }, 00:08:06.605 { 00:08:06.605 "name": "nvmf_tgt_poll_group_3", 00:08:06.605 "admin_qpairs": 0, 00:08:06.605 "io_qpairs": 0, 00:08:06.605 "current_admin_qpairs": 0, 00:08:06.605 "current_io_qpairs": 0, 00:08:06.606 "pending_bdev_io": 0, 00:08:06.606 "completed_nvme_io": 0, 00:08:06.606 "transports": [ 00:08:06.606 { 00:08:06.606 "trtype": "RDMA", 00:08:06.606 "pending_data_buffer": 0, 00:08:06.606 "devices": [ 00:08:06.606 { 00:08:06.606 "name": "mlx5_0", 00:08:06.606 "polls": 931, 00:08:06.606 "idle_polls": 931, 00:08:06.606 "completions": 0, 00:08:06.606 "requests": 0, 00:08:06.606 "request_latency": 0, 00:08:06.606 "pending_free_request": 0, 00:08:06.606 "pending_rdma_read": 0, 00:08:06.606 "pending_rdma_write": 0, 00:08:06.606 "pending_rdma_send": 0, 00:08:06.606 "total_send_wrs": 0, 00:08:06.606 "send_doorbell_updates": 0, 00:08:06.606 "total_recv_wrs": 4096, 
00:08:06.606 "recv_doorbell_updates": 1 00:08:06.606 }, 00:08:06.606 { 00:08:06.606 "name": "mlx5_1", 00:08:06.606 "polls": 931, 00:08:06.606 "idle_polls": 931, 00:08:06.606 "completions": 0, 00:08:06.606 "requests": 0, 00:08:06.606 "request_latency": 0, 00:08:06.606 "pending_free_request": 0, 00:08:06.606 "pending_rdma_read": 0, 00:08:06.606 "pending_rdma_write": 0, 00:08:06.606 "pending_rdma_send": 0, 00:08:06.606 "total_send_wrs": 0, 00:08:06.606 "send_doorbell_updates": 0, 00:08:06.606 "total_recv_wrs": 4096, 00:08:06.606 "recv_doorbell_updates": 1 00:08:06.606 } 00:08:06.606 ] 00:08:06.606 } 00:08:06.606 ] 00:08:06.606 } 00:08:06.606 ] 00:08:06.606 }' 00:08:06.606 17:12:15 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:06.606 17:12:15 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:06.606 17:12:15 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:06.606 17:12:15 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:06.606 17:12:15 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:06.606 17:12:15 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:06.606 17:12:15 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:06.606 17:12:15 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:06.606 17:12:15 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:06.606 17:12:15 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:06.606 17:12:15 -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:08:06.606 17:12:15 -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:08:06.606 17:12:15 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:08:06.606 17:12:15 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:08:06.606 17:12:15 -- target/rpc.sh@15 -- # wc -l 00:08:06.606 17:12:15 -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:08:06.606 17:12:15 -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:08:06.866 17:12:15 -- target/rpc.sh@41 -- # transport_type=RDMA 00:08:06.866 17:12:15 -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:08:06.866 17:12:15 -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:08:06.866 17:12:15 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:08:06.866 17:12:15 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:08:06.866 17:12:15 -- target/rpc.sh@15 -- # wc -l 00:08:06.866 17:12:15 -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:08:06.866 17:12:15 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:06.866 17:12:15 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:06.866 17:12:15 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:06.866 17:12:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:06.866 17:12:15 -- common/autotest_common.sh@10 -- # set +x 00:08:06.866 Malloc1 00:08:06.866 17:12:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:06.866 17:12:15 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:06.866 17:12:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:06.866 17:12:15 -- common/autotest_common.sh@10 -- # set +x 00:08:06.866 17:12:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:06.866 17:12:15 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:06.866 17:12:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:06.866 17:12:15 -- 
common/autotest_common.sh@10 -- # set +x 00:08:06.866 17:12:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:06.866 17:12:15 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:06.866 17:12:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:06.866 17:12:15 -- common/autotest_common.sh@10 -- # set +x 00:08:06.866 17:12:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:06.866 17:12:15 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:06.866 17:12:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:06.866 17:12:15 -- common/autotest_common.sh@10 -- # set +x 00:08:06.866 [2024-04-24 17:12:15.955257] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:06.866 17:12:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:06.866 17:12:15 -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:08:06.866 17:12:15 -- common/autotest_common.sh@638 -- # local es=0 00:08:06.866 17:12:15 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:08:06.866 17:12:15 -- common/autotest_common.sh@626 -- # local arg=nvme 00:08:06.866 17:12:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:06.866 17:12:15 -- common/autotest_common.sh@630 -- # type -t nvme 00:08:06.866 17:12:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:06.866 17:12:15 -- common/autotest_common.sh@632 -- # type -P nvme 00:08:06.866 17:12:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:06.866 17:12:15 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:08:06.866 17:12:15 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:08:06.866 17:12:15 -- common/autotest_common.sh@641 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:08:06.866 [2024-04-24 17:12:16.001194] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562' 00:08:06.866 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:06.866 could not add new controller: failed to write to nvme-fabrics device 00:08:06.866 17:12:16 -- common/autotest_common.sh@641 -- # es=1 00:08:06.866 17:12:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:06.866 17:12:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:06.866 17:12:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:06.866 17:12:16 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:08:06.866 17:12:16 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:08:06.866 17:12:16 -- common/autotest_common.sh@10 -- # set +x 00:08:06.866 17:12:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:06.866 17:12:16 -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:07.866 17:12:17 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:07.866 17:12:17 -- common/autotest_common.sh@1184 -- # local i=0 00:08:07.866 17:12:17 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:07.866 17:12:17 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:07.866 17:12:17 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:09.771 17:12:19 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:09.771 17:12:19 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:09.771 17:12:19 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:10.029 17:12:19 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:10.029 17:12:19 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:10.029 17:12:19 -- common/autotest_common.sh@1194 -- # return 0 00:08:10.029 17:12:19 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:10.965 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:10.965 17:12:19 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:10.965 17:12:19 -- common/autotest_common.sh@1205 -- # local i=0 00:08:10.965 17:12:19 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:10.965 17:12:19 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:10.965 17:12:19 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:10.965 17:12:19 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:10.965 17:12:19 -- common/autotest_common.sh@1217 -- # return 0 00:08:10.965 17:12:19 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:08:10.965 17:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.965 17:12:19 -- common/autotest_common.sh@10 -- # set +x 00:08:10.965 17:12:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.965 17:12:19 -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:10.965 17:12:19 -- common/autotest_common.sh@638 -- # local es=0 00:08:10.965 17:12:19 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:10.965 17:12:19 -- common/autotest_common.sh@626 -- # local arg=nvme 00:08:10.965 17:12:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:10.965 17:12:19 -- common/autotest_common.sh@630 -- # type -t nvme 00:08:10.965 17:12:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:10.965 17:12:20 -- common/autotest_common.sh@632 -- # type -P nvme 00:08:10.965 17:12:19 -- common/autotest_common.sh@630 -- 
# case "$(type -t "$arg")" in 00:08:10.965 17:12:20 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:08:10.965 17:12:20 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:08:10.966 17:12:20 -- common/autotest_common.sh@641 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:10.966 [2024-04-24 17:12:20.032791] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562' 00:08:10.966 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:10.966 could not add new controller: failed to write to nvme-fabrics device 00:08:10.966 17:12:20 -- common/autotest_common.sh@641 -- # es=1 00:08:10.966 17:12:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:10.966 17:12:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:10.966 17:12:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:10.966 17:12:20 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:10.966 17:12:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.966 17:12:20 -- common/autotest_common.sh@10 -- # set +x 00:08:10.966 17:12:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.966 17:12:20 -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:11.900 17:12:21 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:11.900 17:12:21 -- common/autotest_common.sh@1184 -- # local i=0 00:08:11.900 17:12:21 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:11.900 17:12:21 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:11.900 17:12:21 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:13.804 17:12:23 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:13.804 17:12:23 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:13.804 17:12:23 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:14.062 17:12:23 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:14.062 17:12:23 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:14.062 17:12:23 -- common/autotest_common.sh@1194 -- # return 0 00:08:14.062 17:12:23 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:15.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:15.000 17:12:23 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:15.000 17:12:23 -- common/autotest_common.sh@1205 -- # local i=0 00:08:15.000 17:12:23 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:15.000 17:12:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:15.000 17:12:23 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:15.000 17:12:23 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:15.000 17:12:24 -- common/autotest_common.sh@1217 -- # return 0 00:08:15.000 17:12:24 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:15.000 17:12:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:15.000 
17:12:24 -- common/autotest_common.sh@10 -- # set +x 00:08:15.000 17:12:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:15.000 17:12:24 -- target/rpc.sh@81 -- # seq 1 5 00:08:15.000 17:12:24 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:15.000 17:12:24 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:15.000 17:12:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:15.000 17:12:24 -- common/autotest_common.sh@10 -- # set +x 00:08:15.000 17:12:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:15.000 17:12:24 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:15.000 17:12:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:15.000 17:12:24 -- common/autotest_common.sh@10 -- # set +x 00:08:15.000 [2024-04-24 17:12:24.040867] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:15.000 17:12:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:15.000 17:12:24 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:15.000 17:12:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:15.000 17:12:24 -- common/autotest_common.sh@10 -- # set +x 00:08:15.000 17:12:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:15.000 17:12:24 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:15.000 17:12:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:15.000 17:12:24 -- common/autotest_common.sh@10 -- # set +x 00:08:15.000 17:12:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:15.000 17:12:24 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:15.937 17:12:25 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:15.937 17:12:25 -- common/autotest_common.sh@1184 -- # local i=0 00:08:15.937 17:12:25 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:15.937 17:12:25 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:15.937 17:12:25 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:17.841 17:12:27 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:17.841 17:12:27 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:17.841 17:12:27 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:17.841 17:12:27 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:17.841 17:12:27 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:17.841 17:12:27 -- common/autotest_common.sh@1194 -- # return 0 00:08:17.841 17:12:27 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:18.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:18.779 17:12:28 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:18.779 17:12:28 -- common/autotest_common.sh@1205 -- # local i=0 00:08:18.779 17:12:28 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:18.779 17:12:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:18.779 17:12:28 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:18.779 17:12:28 -- common/autotest_common.sh@1213 
-- # grep -q -w SPDKISFASTANDAWESOME 00:08:19.038 17:12:28 -- common/autotest_common.sh@1217 -- # return 0 00:08:19.038 17:12:28 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:19.038 17:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:19.038 17:12:28 -- common/autotest_common.sh@10 -- # set +x 00:08:19.038 17:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:19.038 17:12:28 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:19.038 17:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:19.038 17:12:28 -- common/autotest_common.sh@10 -- # set +x 00:08:19.038 17:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:19.038 17:12:28 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:19.038 17:12:28 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:19.038 17:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:19.038 17:12:28 -- common/autotest_common.sh@10 -- # set +x 00:08:19.038 17:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:19.038 17:12:28 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:19.038 17:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:19.038 17:12:28 -- common/autotest_common.sh@10 -- # set +x 00:08:19.038 [2024-04-24 17:12:28.062334] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:19.038 17:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:19.038 17:12:28 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:19.038 17:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:19.038 17:12:28 -- common/autotest_common.sh@10 -- # set +x 00:08:19.038 17:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:19.038 17:12:28 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:19.038 17:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:19.038 17:12:28 -- common/autotest_common.sh@10 -- # set +x 00:08:19.038 17:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:19.038 17:12:28 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:19.974 17:12:29 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:19.974 17:12:29 -- common/autotest_common.sh@1184 -- # local i=0 00:08:19.974 17:12:29 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:19.974 17:12:29 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:19.974 17:12:29 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:21.878 17:12:31 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:21.878 17:12:31 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:21.878 17:12:31 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:21.878 17:12:31 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:21.878 17:12:31 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:21.878 17:12:31 -- common/autotest_common.sh@1194 -- # return 0 00:08:21.878 17:12:31 -- target/rpc.sh@90 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:08:22.814 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:22.814 17:12:32 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:22.814 17:12:32 -- common/autotest_common.sh@1205 -- # local i=0 00:08:22.814 17:12:32 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:22.814 17:12:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:22.814 17:12:32 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:22.815 17:12:32 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:22.815 17:12:32 -- common/autotest_common.sh@1217 -- # return 0 00:08:22.815 17:12:32 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:22.815 17:12:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:22.815 17:12:32 -- common/autotest_common.sh@10 -- # set +x 00:08:22.815 17:12:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:22.815 17:12:32 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:22.815 17:12:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:22.815 17:12:32 -- common/autotest_common.sh@10 -- # set +x 00:08:22.815 17:12:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:22.815 17:12:32 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:22.815 17:12:32 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:22.815 17:12:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:22.815 17:12:32 -- common/autotest_common.sh@10 -- # set +x 00:08:22.815 17:12:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:22.815 17:12:32 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:22.815 17:12:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:22.815 17:12:32 -- common/autotest_common.sh@10 -- # set +x 00:08:22.815 [2024-04-24 17:12:32.060542] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:23.073 17:12:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.073 17:12:32 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:23.073 17:12:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.073 17:12:32 -- common/autotest_common.sh@10 -- # set +x 00:08:23.073 17:12:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.073 17:12:32 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:23.073 17:12:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.073 17:12:32 -- common/autotest_common.sh@10 -- # set +x 00:08:23.073 17:12:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.073 17:12:32 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:24.010 17:12:33 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:24.010 17:12:33 -- common/autotest_common.sh@1184 -- # local i=0 00:08:24.010 17:12:33 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:24.010 17:12:33 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:24.010 17:12:33 -- common/autotest_common.sh@1191 -- # sleep 2 
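Each pass through the rpc.sh loop traced here exercises the same subsystem lifecycle; the following condensed sketch of one iteration is not captured output, and it assumes rpc_cmd forwards to the SPDK RPC client of the nvmf_tgt started earlier, and that waitforserial/waitforserial_disconnect poll lsblk for the SPDKISFASTANDAWESOME serial as the trace shows:

    # One iteration of the create/connect/disconnect/teardown loop (rpc.sh@81-94).
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1

    nvme connect -i 15 --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
    waitforserial SPDKISFASTANDAWESOME              # wait until the namespace shows up in lsblk
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME   # wait until the block device is gone again

    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1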
00:08:25.940 17:12:35 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:25.940 17:12:35 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:25.940 17:12:35 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:25.940 17:12:35 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:25.940 17:12:35 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:25.940 17:12:35 -- common/autotest_common.sh@1194 -- # return 0 00:08:25.940 17:12:35 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:26.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:26.874 17:12:36 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:26.874 17:12:36 -- common/autotest_common.sh@1205 -- # local i=0 00:08:26.874 17:12:36 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:26.874 17:12:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:26.874 17:12:36 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:26.874 17:12:36 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:26.874 17:12:36 -- common/autotest_common.sh@1217 -- # return 0 00:08:26.874 17:12:36 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:26.874 17:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:26.874 17:12:36 -- common/autotest_common.sh@10 -- # set +x 00:08:26.874 17:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:26.874 17:12:36 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:26.874 17:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:26.874 17:12:36 -- common/autotest_common.sh@10 -- # set +x 00:08:26.874 17:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:26.874 17:12:36 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:26.874 17:12:36 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:26.874 17:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:26.874 17:12:36 -- common/autotest_common.sh@10 -- # set +x 00:08:26.874 17:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:26.874 17:12:36 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:26.874 17:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:26.874 17:12:36 -- common/autotest_common.sh@10 -- # set +x 00:08:26.874 [2024-04-24 17:12:36.069374] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:26.874 17:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:26.874 17:12:36 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:26.874 17:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:26.874 17:12:36 -- common/autotest_common.sh@10 -- # set +x 00:08:26.874 17:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:26.874 17:12:36 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:26.874 17:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:26.874 17:12:36 -- common/autotest_common.sh@10 -- # set +x 00:08:26.874 17:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:26.874 17:12:36 -- target/rpc.sh@86 -- # nvme connect -i 15 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:27.813 17:12:37 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:27.813 17:12:37 -- common/autotest_common.sh@1184 -- # local i=0 00:08:27.813 17:12:37 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:27.813 17:12:37 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:27.813 17:12:37 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:30.350 17:12:39 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:30.350 17:12:39 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:30.350 17:12:39 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:30.350 17:12:39 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:30.350 17:12:39 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:30.350 17:12:39 -- common/autotest_common.sh@1194 -- # return 0 00:08:30.350 17:12:39 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:30.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.917 17:12:39 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:30.917 17:12:39 -- common/autotest_common.sh@1205 -- # local i=0 00:08:30.917 17:12:39 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:30.917 17:12:39 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:30.917 17:12:40 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:30.917 17:12:40 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:30.917 17:12:40 -- common/autotest_common.sh@1217 -- # return 0 00:08:30.917 17:12:40 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:30.917 17:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.917 17:12:40 -- common/autotest_common.sh@10 -- # set +x 00:08:30.917 17:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.917 17:12:40 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:30.917 17:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.917 17:12:40 -- common/autotest_common.sh@10 -- # set +x 00:08:30.917 17:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.917 17:12:40 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:30.917 17:12:40 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:30.917 17:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.917 17:12:40 -- common/autotest_common.sh@10 -- # set +x 00:08:30.917 17:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.917 17:12:40 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:30.917 17:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.917 17:12:40 -- common/autotest_common.sh@10 -- # set +x 00:08:30.917 [2024-04-24 17:12:40.052163] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:30.917 17:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.917 17:12:40 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:30.917 17:12:40 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.917 17:12:40 -- common/autotest_common.sh@10 -- # set +x 00:08:30.917 17:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.917 17:12:40 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:30.917 17:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.917 17:12:40 -- common/autotest_common.sh@10 -- # set +x 00:08:30.917 17:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.917 17:12:40 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:31.857 17:12:41 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:31.857 17:12:41 -- common/autotest_common.sh@1184 -- # local i=0 00:08:31.857 17:12:41 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:31.857 17:12:41 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:31.858 17:12:41 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:34.393 17:12:43 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:34.393 17:12:43 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:34.393 17:12:43 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:34.393 17:12:43 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:34.393 17:12:43 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:34.393 17:12:43 -- common/autotest_common.sh@1194 -- # return 0 00:08:34.394 17:12:43 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:34.962 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.962 17:12:43 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:34.962 17:12:43 -- common/autotest_common.sh@1205 -- # local i=0 00:08:34.962 17:12:43 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:34.962 17:12:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:34.962 17:12:44 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:34.962 17:12:44 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:34.962 17:12:44 -- common/autotest_common.sh@1217 -- # return 0 00:08:34.962 17:12:44 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:34.962 17:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.962 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:34.962 17:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.962 17:12:44 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:34.962 17:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.962 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:34.962 17:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.962 17:12:44 -- target/rpc.sh@99 -- # seq 1 5 00:08:34.962 17:12:44 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:34.962 17:12:44 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:34.962 17:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.962 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:34.962 17:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.962 17:12:44 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:34.962 17:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.962 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:34.962 [2024-04-24 17:12:44.064344] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:34.962 17:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.962 17:12:44 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:34.962 17:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.962 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:34.962 17:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.962 17:12:44 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:34.962 17:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.962 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:34.962 17:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.962 17:12:44 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.962 17:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.962 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:34.962 17:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.962 17:12:44 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:34.962 17:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.962 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:34.962 17:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.962 17:12:44 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:34.962 17:12:44 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:34.962 17:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.962 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:34.962 17:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.962 17:12:44 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:34.962 17:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.962 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:34.962 [2024-04-24 17:12:44.116523] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:34.962 17:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.963 17:12:44 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:34.963 17:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.963 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:34.963 17:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.963 17:12:44 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:34.963 17:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.963 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:34.963 17:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.963 17:12:44 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.963 17:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.963 17:12:44 
-- common/autotest_common.sh@10 -- # set +x 00:08:34.963 17:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.963 17:12:44 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:34.963 17:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.963 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:34.963 17:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.963 17:12:44 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:34.963 17:12:44 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:34.963 17:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.963 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:34.963 17:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.963 17:12:44 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:34.963 17:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.963 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:34.963 [2024-04-24 17:12:44.164710] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:34.963 17:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.963 17:12:44 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:34.963 17:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.963 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:34.963 17:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.963 17:12:44 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:34.963 17:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.963 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:34.963 17:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.963 17:12:44 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.963 17:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.963 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:34.963 17:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.963 17:12:44 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:34.963 17:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.963 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:34.963 17:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.963 17:12:44 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:34.963 17:12:44 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:34.963 17:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.963 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:35.222 17:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.222 17:12:44 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:35.222 17:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.222 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:35.222 [2024-04-24 17:12:44.216916] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:35.222 17:12:44 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:08:35.222 17:12:44 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:35.222 17:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.222 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:35.222 17:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.222 17:12:44 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:35.222 17:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.222 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:35.222 17:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.222 17:12:44 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.222 17:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.222 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:35.222 17:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.222 17:12:44 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:35.222 17:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.222 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:35.222 17:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.222 17:12:44 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:35.222 17:12:44 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:35.222 17:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.222 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:35.222 17:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.222 17:12:44 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:35.222 17:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.222 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:35.222 [2024-04-24 17:12:44.265101] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:35.222 17:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.222 17:12:44 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:35.222 17:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.222 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:35.222 17:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.222 17:12:44 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:35.222 17:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.222 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:35.222 17:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.222 17:12:44 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.222 17:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.222 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:35.222 17:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.222 17:12:44 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:35.222 17:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.222 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:35.222 17:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.222 17:12:44 -- 
target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:08:35.222 17:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.222 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:35.222 17:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.222 17:12:44 -- target/rpc.sh@110 -- # stats='{ 00:08:35.222 "tick_rate": 2100000000, 00:08:35.222 "poll_groups": [ 00:08:35.222 { 00:08:35.222 "name": "nvmf_tgt_poll_group_0", 00:08:35.222 "admin_qpairs": 2, 00:08:35.222 "io_qpairs": 27, 00:08:35.222 "current_admin_qpairs": 0, 00:08:35.222 "current_io_qpairs": 0, 00:08:35.222 "pending_bdev_io": 0, 00:08:35.222 "completed_nvme_io": 78, 00:08:35.222 "transports": [ 00:08:35.222 { 00:08:35.222 "trtype": "RDMA", 00:08:35.222 "pending_data_buffer": 0, 00:08:35.222 "devices": [ 00:08:35.222 { 00:08:35.222 "name": "mlx5_0", 00:08:35.222 "polls": 3562805, 00:08:35.222 "idle_polls": 3562568, 00:08:35.222 "completions": 255, 00:08:35.222 "requests": 127, 00:08:35.222 "request_latency": 18444360, 00:08:35.222 "pending_free_request": 0, 00:08:35.222 "pending_rdma_read": 0, 00:08:35.222 "pending_rdma_write": 0, 00:08:35.222 "pending_rdma_send": 0, 00:08:35.222 "total_send_wrs": 199, 00:08:35.222 "send_doorbell_updates": 116, 00:08:35.222 "total_recv_wrs": 4223, 00:08:35.223 "recv_doorbell_updates": 116 00:08:35.223 }, 00:08:35.223 { 00:08:35.223 "name": "mlx5_1", 00:08:35.223 "polls": 3562805, 00:08:35.223 "idle_polls": 3562805, 00:08:35.223 "completions": 0, 00:08:35.223 "requests": 0, 00:08:35.223 "request_latency": 0, 00:08:35.223 "pending_free_request": 0, 00:08:35.223 "pending_rdma_read": 0, 00:08:35.223 "pending_rdma_write": 0, 00:08:35.223 "pending_rdma_send": 0, 00:08:35.223 "total_send_wrs": 0, 00:08:35.223 "send_doorbell_updates": 0, 00:08:35.223 "total_recv_wrs": 4096, 00:08:35.223 "recv_doorbell_updates": 1 00:08:35.223 } 00:08:35.223 ] 00:08:35.223 } 00:08:35.223 ] 00:08:35.223 }, 00:08:35.223 { 00:08:35.223 "name": "nvmf_tgt_poll_group_1", 00:08:35.223 "admin_qpairs": 2, 00:08:35.223 "io_qpairs": 26, 00:08:35.223 "current_admin_qpairs": 0, 00:08:35.223 "current_io_qpairs": 0, 00:08:35.223 "pending_bdev_io": 0, 00:08:35.223 "completed_nvme_io": 174, 00:08:35.223 "transports": [ 00:08:35.223 { 00:08:35.223 "trtype": "RDMA", 00:08:35.223 "pending_data_buffer": 0, 00:08:35.223 "devices": [ 00:08:35.223 { 00:08:35.223 "name": "mlx5_0", 00:08:35.223 "polls": 3599529, 00:08:35.223 "idle_polls": 3599142, 00:08:35.223 "completions": 444, 00:08:35.223 "requests": 222, 00:08:35.223 "request_latency": 42087840, 00:08:35.223 "pending_free_request": 0, 00:08:35.223 "pending_rdma_read": 0, 00:08:35.223 "pending_rdma_write": 0, 00:08:35.223 "pending_rdma_send": 0, 00:08:35.223 "total_send_wrs": 390, 00:08:35.223 "send_doorbell_updates": 185, 00:08:35.223 "total_recv_wrs": 4318, 00:08:35.223 "recv_doorbell_updates": 186 00:08:35.223 }, 00:08:35.223 { 00:08:35.223 "name": "mlx5_1", 00:08:35.223 "polls": 3599529, 00:08:35.223 "idle_polls": 3599529, 00:08:35.223 "completions": 0, 00:08:35.223 "requests": 0, 00:08:35.223 "request_latency": 0, 00:08:35.223 "pending_free_request": 0, 00:08:35.223 "pending_rdma_read": 0, 00:08:35.223 "pending_rdma_write": 0, 00:08:35.223 "pending_rdma_send": 0, 00:08:35.223 "total_send_wrs": 0, 00:08:35.223 "send_doorbell_updates": 0, 00:08:35.223 "total_recv_wrs": 4096, 00:08:35.223 "recv_doorbell_updates": 1 00:08:35.223 } 00:08:35.223 ] 00:08:35.223 } 00:08:35.223 ] 00:08:35.223 }, 00:08:35.223 { 00:08:35.223 "name": "nvmf_tgt_poll_group_2", 00:08:35.223 
"admin_qpairs": 1, 00:08:35.223 "io_qpairs": 26, 00:08:35.223 "current_admin_qpairs": 0, 00:08:35.223 "current_io_qpairs": 0, 00:08:35.223 "pending_bdev_io": 0, 00:08:35.223 "completed_nvme_io": 126, 00:08:35.223 "transports": [ 00:08:35.223 { 00:08:35.223 "trtype": "RDMA", 00:08:35.223 "pending_data_buffer": 0, 00:08:35.223 "devices": [ 00:08:35.223 { 00:08:35.223 "name": "mlx5_0", 00:08:35.223 "polls": 3570305, 00:08:35.223 "idle_polls": 3570042, 00:08:35.223 "completions": 303, 00:08:35.223 "requests": 151, 00:08:35.223 "request_latency": 29661902, 00:08:35.223 "pending_free_request": 0, 00:08:35.223 "pending_rdma_read": 0, 00:08:35.223 "pending_rdma_write": 0, 00:08:35.223 "pending_rdma_send": 0, 00:08:35.223 "total_send_wrs": 262, 00:08:35.223 "send_doorbell_updates": 126, 00:08:35.223 "total_recv_wrs": 4247, 00:08:35.223 "recv_doorbell_updates": 126 00:08:35.223 }, 00:08:35.223 { 00:08:35.223 "name": "mlx5_1", 00:08:35.223 "polls": 3570305, 00:08:35.223 "idle_polls": 3570305, 00:08:35.223 "completions": 0, 00:08:35.223 "requests": 0, 00:08:35.223 "request_latency": 0, 00:08:35.223 "pending_free_request": 0, 00:08:35.223 "pending_rdma_read": 0, 00:08:35.223 "pending_rdma_write": 0, 00:08:35.223 "pending_rdma_send": 0, 00:08:35.223 "total_send_wrs": 0, 00:08:35.223 "send_doorbell_updates": 0, 00:08:35.223 "total_recv_wrs": 4096, 00:08:35.223 "recv_doorbell_updates": 1 00:08:35.223 } 00:08:35.223 ] 00:08:35.223 } 00:08:35.223 ] 00:08:35.223 }, 00:08:35.223 { 00:08:35.223 "name": "nvmf_tgt_poll_group_3", 00:08:35.223 "admin_qpairs": 2, 00:08:35.223 "io_qpairs": 26, 00:08:35.223 "current_admin_qpairs": 0, 00:08:35.223 "current_io_qpairs": 0, 00:08:35.223 "pending_bdev_io": 0, 00:08:35.223 "completed_nvme_io": 77, 00:08:35.223 "transports": [ 00:08:35.223 { 00:08:35.223 "trtype": "RDMA", 00:08:35.223 "pending_data_buffer": 0, 00:08:35.223 "devices": [ 00:08:35.223 { 00:08:35.223 "name": "mlx5_0", 00:08:35.223 "polls": 2836078, 00:08:35.223 "idle_polls": 2835849, 00:08:35.223 "completions": 250, 00:08:35.223 "requests": 125, 00:08:35.223 "request_latency": 18823768, 00:08:35.223 "pending_free_request": 0, 00:08:35.223 "pending_rdma_read": 0, 00:08:35.223 "pending_rdma_write": 0, 00:08:35.223 "pending_rdma_send": 0, 00:08:35.223 "total_send_wrs": 196, 00:08:35.223 "send_doorbell_updates": 112, 00:08:35.223 "total_recv_wrs": 4221, 00:08:35.223 "recv_doorbell_updates": 113 00:08:35.223 }, 00:08:35.223 { 00:08:35.223 "name": "mlx5_1", 00:08:35.223 "polls": 2836078, 00:08:35.223 "idle_polls": 2836078, 00:08:35.223 "completions": 0, 00:08:35.223 "requests": 0, 00:08:35.223 "request_latency": 0, 00:08:35.223 "pending_free_request": 0, 00:08:35.223 "pending_rdma_read": 0, 00:08:35.223 "pending_rdma_write": 0, 00:08:35.223 "pending_rdma_send": 0, 00:08:35.223 "total_send_wrs": 0, 00:08:35.223 "send_doorbell_updates": 0, 00:08:35.223 "total_recv_wrs": 4096, 00:08:35.223 "recv_doorbell_updates": 1 00:08:35.223 } 00:08:35.223 ] 00:08:35.223 } 00:08:35.223 ] 00:08:35.223 } 00:08:35.223 ] 00:08:35.223 }' 00:08:35.223 17:12:44 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:08:35.223 17:12:44 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:35.223 17:12:44 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:35.223 17:12:44 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:35.223 17:12:44 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:08:35.223 17:12:44 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:08:35.223 17:12:44 -- target/rpc.sh@19 
-- # local 'filter=.poll_groups[].io_qpairs' 00:08:35.223 17:12:44 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:35.223 17:12:44 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:35.223 17:12:44 -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:08:35.223 17:12:44 -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:08:35.223 17:12:44 -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:08:35.223 17:12:44 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:08:35.223 17:12:44 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:08:35.223 17:12:44 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:35.483 17:12:44 -- target/rpc.sh@117 -- # (( 1252 > 0 )) 00:08:35.483 17:12:44 -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:08:35.483 17:12:44 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:08:35.483 17:12:44 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:08:35.483 17:12:44 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:35.483 17:12:44 -- target/rpc.sh@118 -- # (( 109017870 > 0 )) 00:08:35.483 17:12:44 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:08:35.483 17:12:44 -- target/rpc.sh@123 -- # nvmftestfini 00:08:35.483 17:12:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:35.483 17:12:44 -- nvmf/common.sh@117 -- # sync 00:08:35.483 17:12:44 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:35.483 17:12:44 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:35.483 17:12:44 -- nvmf/common.sh@120 -- # set +e 00:08:35.483 17:12:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:35.483 17:12:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:35.483 rmmod nvme_rdma 00:08:35.483 rmmod nvme_fabrics 00:08:35.483 17:12:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:35.483 17:12:44 -- nvmf/common.sh@124 -- # set -e 00:08:35.483 17:12:44 -- nvmf/common.sh@125 -- # return 0 00:08:35.483 17:12:44 -- nvmf/common.sh@478 -- # '[' -n 2970362 ']' 00:08:35.483 17:12:44 -- nvmf/common.sh@479 -- # killprocess 2970362 00:08:35.483 17:12:44 -- common/autotest_common.sh@936 -- # '[' -z 2970362 ']' 00:08:35.483 17:12:44 -- common/autotest_common.sh@940 -- # kill -0 2970362 00:08:35.483 17:12:44 -- common/autotest_common.sh@941 -- # uname 00:08:35.483 17:12:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:35.483 17:12:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2970362 00:08:35.483 17:12:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:35.483 17:12:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:35.483 17:12:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2970362' 00:08:35.483 killing process with pid 2970362 00:08:35.483 17:12:44 -- common/autotest_common.sh@955 -- # kill 2970362 00:08:35.483 17:12:44 -- common/autotest_common.sh@960 -- # wait 2970362 00:08:35.743 17:12:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:35.743 17:12:44 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:08:35.743 00:08:35.743 real 0m35.128s 00:08:35.743 user 2m1.786s 00:08:35.743 sys 0m4.822s 00:08:35.743 17:12:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:35.743 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:35.743 ************************************ 00:08:35.743 END TEST nvmf_rpc 00:08:35.743 
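The statistics check that closes the nvmf_rpc run above is driven by a small jsum helper in target/rpc.sh: each call applies a jq filter to the captured nvmf_get_stats JSON and sums the matching values with awk, and the test then asserts that every aggregate (admin_qpairs, io_qpairs, RDMA completions, request_latency) is positive after the I/O phase. A minimal sketch of that pattern, reconstructed from the trace rather than copied from the script (stats here is assumed to hold the JSON captured at rpc.sh@110):

jsum() {
    local filter=$1
    # Sum one numeric field across all poll groups / RDMA devices in the stats blob.
    echo "$stats" | jq "$filter" | awk '{s+=$1}END{print s}'
}

# As exercised in the log: each total must be non-zero once traffic has run.
(( $(jsum '.poll_groups[].io_qpairs') > 0 ))                                # 105 in this run
(( $(jsum '.poll_groups[].transports[].devices[].request_latency') > 0 ))   # 109017870 in this run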
************************************ 00:08:35.743 17:12:44 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:08:35.743 17:12:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:35.743 17:12:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:35.743 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:36.003 ************************************ 00:08:36.003 START TEST nvmf_invalid 00:08:36.003 ************************************ 00:08:36.003 17:12:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:08:36.003 * Looking for test storage... 00:08:36.003 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:36.003 17:12:45 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:36.003 17:12:45 -- nvmf/common.sh@7 -- # uname -s 00:08:36.003 17:12:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:36.003 17:12:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:36.003 17:12:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:36.003 17:12:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:36.003 17:12:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:36.003 17:12:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:36.003 17:12:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:36.003 17:12:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:36.003 17:12:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:36.003 17:12:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:36.003 17:12:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:08:36.003 17:12:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:08:36.003 17:12:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:36.003 17:12:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:36.003 17:12:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:36.003 17:12:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:36.003 17:12:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:36.003 17:12:45 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.003 17:12:45 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.003 17:12:45 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.003 17:12:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.003 17:12:45 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.003 17:12:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.003 17:12:45 -- paths/export.sh@5 -- # export PATH 00:08:36.003 17:12:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.003 17:12:45 -- nvmf/common.sh@47 -- # : 0 00:08:36.003 17:12:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:36.003 17:12:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:36.003 17:12:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:36.003 17:12:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:36.003 17:12:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:36.003 17:12:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:36.003 17:12:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:36.003 17:12:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:36.003 17:12:45 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:36.003 17:12:45 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:36.003 17:12:45 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:08:36.003 17:12:45 -- target/invalid.sh@14 -- # target=foobar 00:08:36.003 17:12:45 -- target/invalid.sh@16 -- # RANDOM=0 00:08:36.003 17:12:45 -- target/invalid.sh@34 -- # nvmftestinit 00:08:36.003 17:12:45 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:08:36.003 17:12:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:36.003 17:12:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:36.003 17:12:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:36.003 17:12:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:36.003 17:12:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.003 17:12:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:36.003 17:12:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.003 17:12:45 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:36.003 17:12:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:36.003 17:12:45 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:36.003 17:12:45 -- common/autotest_common.sh@10 -- # set +x 00:08:41.359 17:12:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:41.359 17:12:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:41.359 17:12:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:41.359 17:12:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:41.359 17:12:50 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:41.359 17:12:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:41.359 17:12:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:41.359 17:12:50 -- nvmf/common.sh@295 -- # net_devs=() 00:08:41.359 17:12:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:41.359 17:12:50 -- nvmf/common.sh@296 -- # e810=() 00:08:41.359 17:12:50 -- nvmf/common.sh@296 -- # local -ga e810 00:08:41.359 17:12:50 -- nvmf/common.sh@297 -- # x722=() 00:08:41.359 17:12:50 -- nvmf/common.sh@297 -- # local -ga x722 00:08:41.359 17:12:50 -- nvmf/common.sh@298 -- # mlx=() 00:08:41.359 17:12:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:41.359 17:12:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:41.359 17:12:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:41.359 17:12:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:41.359 17:12:50 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:41.359 17:12:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:41.359 17:12:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:41.359 17:12:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:41.359 17:12:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:41.359 17:12:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:41.359 17:12:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:41.359 17:12:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:41.359 17:12:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:41.359 17:12:50 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:41.359 17:12:50 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:41.359 17:12:50 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:41.359 17:12:50 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:41.359 17:12:50 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:41.359 17:12:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:41.359 17:12:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:41.359 17:12:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:08:41.359 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:08:41.359 17:12:50 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:41.359 17:12:50 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:41.359 17:12:50 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:41.359 17:12:50 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:41.359 17:12:50 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:41.359 17:12:50 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:41.359 17:12:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:41.359 17:12:50 -- nvmf/common.sh@341 -- # echo 'Found 
0000:da:00.1 (0x15b3 - 0x1015)' 00:08:41.359 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:08:41.359 17:12:50 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:41.359 17:12:50 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:41.359 17:12:50 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:41.359 17:12:50 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:41.359 17:12:50 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:41.359 17:12:50 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:41.359 17:12:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:41.359 17:12:50 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:41.359 17:12:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:41.359 17:12:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.359 17:12:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:41.359 17:12:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.359 17:12:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:08:41.359 Found net devices under 0000:da:00.0: mlx_0_0 00:08:41.359 17:12:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.359 17:12:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:41.359 17:12:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.359 17:12:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:41.359 17:12:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.359 17:12:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:08:41.359 Found net devices under 0000:da:00.1: mlx_0_1 00:08:41.359 17:12:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.359 17:12:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:41.359 17:12:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:41.359 17:12:50 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:41.359 17:12:50 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:08:41.359 17:12:50 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:08:41.359 17:12:50 -- nvmf/common.sh@409 -- # rdma_device_init 00:08:41.359 17:12:50 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:08:41.359 17:12:50 -- nvmf/common.sh@58 -- # uname 00:08:41.359 17:12:50 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:41.359 17:12:50 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:41.359 17:12:50 -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:41.359 17:12:50 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:41.359 17:12:50 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:41.359 17:12:50 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:41.359 17:12:50 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:41.359 17:12:50 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:41.359 17:12:50 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:08:41.359 17:12:50 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:41.359 17:12:50 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:41.359 17:12:50 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:41.359 17:12:50 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:41.359 17:12:50 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:41.359 17:12:50 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:41.359 17:12:50 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:41.359 17:12:50 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 
00:08:41.359 17:12:50 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.359 17:12:50 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:41.360 17:12:50 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:41.360 17:12:50 -- nvmf/common.sh@105 -- # continue 2 00:08:41.360 17:12:50 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:41.360 17:12:50 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.360 17:12:50 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:41.360 17:12:50 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.360 17:12:50 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:41.360 17:12:50 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:41.360 17:12:50 -- nvmf/common.sh@105 -- # continue 2 00:08:41.360 17:12:50 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:41.360 17:12:50 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:41.360 17:12:50 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:41.360 17:12:50 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:41.360 17:12:50 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:41.360 17:12:50 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:41.360 17:12:50 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:41.360 17:12:50 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:41.360 17:12:50 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:41.360 430: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:41.360 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:08:41.360 altname enp218s0f0np0 00:08:41.360 altname ens818f0np0 00:08:41.360 inet 192.168.100.8/24 scope global mlx_0_0 00:08:41.360 valid_lft forever preferred_lft forever 00:08:41.360 17:12:50 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:41.360 17:12:50 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:41.360 17:12:50 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:41.360 17:12:50 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:41.360 17:12:50 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:41.360 17:12:50 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:41.360 17:12:50 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:41.360 17:12:50 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:41.360 17:12:50 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:41.360 431: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:41.360 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:08:41.360 altname enp218s0f1np1 00:08:41.360 altname ens818f1np1 00:08:41.360 inet 192.168.100.9/24 scope global mlx_0_1 00:08:41.360 valid_lft forever preferred_lft forever 00:08:41.360 17:12:50 -- nvmf/common.sh@411 -- # return 0 00:08:41.360 17:12:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:41.360 17:12:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:41.360 17:12:50 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:08:41.360 17:12:50 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:08:41.360 17:12:50 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:41.360 17:12:50 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:41.360 17:12:50 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:41.360 17:12:50 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:41.360 17:12:50 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:41.360 17:12:50 -- nvmf/common.sh@96 -- # (( 2 
== 0 )) 00:08:41.360 17:12:50 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:41.360 17:12:50 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.360 17:12:50 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:41.360 17:12:50 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:41.360 17:12:50 -- nvmf/common.sh@105 -- # continue 2 00:08:41.360 17:12:50 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:41.360 17:12:50 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.360 17:12:50 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:41.360 17:12:50 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.360 17:12:50 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:41.360 17:12:50 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:41.360 17:12:50 -- nvmf/common.sh@105 -- # continue 2 00:08:41.360 17:12:50 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:41.360 17:12:50 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:41.360 17:12:50 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:41.360 17:12:50 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:41.360 17:12:50 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:41.360 17:12:50 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:41.360 17:12:50 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:41.360 17:12:50 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:41.360 17:12:50 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:41.360 17:12:50 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:41.360 17:12:50 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:41.360 17:12:50 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:41.360 17:12:50 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:08:41.360 192.168.100.9' 00:08:41.360 17:12:50 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:41.360 192.168.100.9' 00:08:41.360 17:12:50 -- nvmf/common.sh@446 -- # head -n 1 00:08:41.360 17:12:50 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:41.360 17:12:50 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:08:41.360 192.168.100.9' 00:08:41.360 17:12:50 -- nvmf/common.sh@447 -- # tail -n +2 00:08:41.360 17:12:50 -- nvmf/common.sh@447 -- # head -n 1 00:08:41.360 17:12:50 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:41.360 17:12:50 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:08:41.360 17:12:50 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:41.360 17:12:50 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:08:41.360 17:12:50 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:08:41.360 17:12:50 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:08:41.360 17:12:50 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:08:41.360 17:12:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:41.360 17:12:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:41.360 17:12:50 -- common/autotest_common.sh@10 -- # set +x 00:08:41.360 17:12:50 -- nvmf/common.sh@470 -- # nvmfpid=2973136 00:08:41.360 17:12:50 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:41.360 17:12:50 -- nvmf/common.sh@471 -- # waitforlisten 2973136 00:08:41.360 17:12:50 -- common/autotest_common.sh@817 -- # '[' -z 2973136 ']' 00:08:41.360 17:12:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.360 
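Everything the invalid.sh cases below need from the fabric was established in the setup trace above: the two mlx5 ports are matched against the rxe-net list, and each port's IPv4 address is read back with ip/awk/cut so that NVMF_FIRST_TARGET_IP=192.168.100.8 and NVMF_SECOND_TARGET_IP=192.168.100.9 are known before nvmf_tgt is started. A condensed sketch of that address extraction, assuming the interface names seen in this log (the real common.sh derives the TARGET_IP values from the collected RDMA_IP_LIST rather than calling the helper directly):

get_ip_address() {
    local interface=$1
    # First IPv4 address on the interface, stripped of its prefix length.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)     # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)    # 192.168.100.9 in this run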
17:12:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:41.360 17:12:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.360 17:12:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:41.360 17:12:50 -- common/autotest_common.sh@10 -- # set +x 00:08:41.360 [2024-04-24 17:12:50.502540] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:08:41.360 [2024-04-24 17:12:50.502586] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.360 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.360 [2024-04-24 17:12:50.558111] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:41.619 [2024-04-24 17:12:50.632662] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.619 [2024-04-24 17:12:50.632701] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.619 [2024-04-24 17:12:50.632709] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.620 [2024-04-24 17:12:50.632715] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.620 [2024-04-24 17:12:50.632720] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:41.620 [2024-04-24 17:12:50.632762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.620 [2024-04-24 17:12:50.632859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:41.620 [2024-04-24 17:12:50.632909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:41.620 [2024-04-24 17:12:50.632910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.187 17:12:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:42.187 17:12:51 -- common/autotest_common.sh@850 -- # return 0 00:08:42.187 17:12:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:42.187 17:12:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:42.187 17:12:51 -- common/autotest_common.sh@10 -- # set +x 00:08:42.187 17:12:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.187 17:12:51 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:42.187 17:12:51 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode12325 00:08:42.445 [2024-04-24 17:12:51.489019] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:08:42.445 17:12:51 -- target/invalid.sh@40 -- # out='request: 00:08:42.445 { 00:08:42.445 "nqn": "nqn.2016-06.io.spdk:cnode12325", 00:08:42.445 "tgt_name": "foobar", 00:08:42.445 "method": "nvmf_create_subsystem", 00:08:42.445 "req_id": 1 00:08:42.445 } 00:08:42.445 Got JSON-RPC error response 00:08:42.445 response: 00:08:42.445 { 00:08:42.445 "code": -32603, 00:08:42.445 "message": "Unable to find target foobar" 00:08:42.445 }' 00:08:42.445 17:12:51 -- target/invalid.sh@41 -- # [[ request: 00:08:42.445 { 00:08:42.445 
"nqn": "nqn.2016-06.io.spdk:cnode12325", 00:08:42.445 "tgt_name": "foobar", 00:08:42.445 "method": "nvmf_create_subsystem", 00:08:42.445 "req_id": 1 00:08:42.445 } 00:08:42.445 Got JSON-RPC error response 00:08:42.445 response: 00:08:42.445 { 00:08:42.445 "code": -32603, 00:08:42.445 "message": "Unable to find target foobar" 00:08:42.445 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:08:42.445 17:12:51 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:08:42.445 17:12:51 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode6134 00:08:42.445 [2024-04-24 17:12:51.673660] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6134: invalid serial number 'SPDKISFASTANDAWESOME' 00:08:42.704 17:12:51 -- target/invalid.sh@45 -- # out='request: 00:08:42.704 { 00:08:42.704 "nqn": "nqn.2016-06.io.spdk:cnode6134", 00:08:42.704 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:42.704 "method": "nvmf_create_subsystem", 00:08:42.704 "req_id": 1 00:08:42.704 } 00:08:42.704 Got JSON-RPC error response 00:08:42.704 response: 00:08:42.704 { 00:08:42.704 "code": -32602, 00:08:42.704 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:42.704 }' 00:08:42.704 17:12:51 -- target/invalid.sh@46 -- # [[ request: 00:08:42.704 { 00:08:42.704 "nqn": "nqn.2016-06.io.spdk:cnode6134", 00:08:42.704 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:42.704 "method": "nvmf_create_subsystem", 00:08:42.704 "req_id": 1 00:08:42.704 } 00:08:42.704 Got JSON-RPC error response 00:08:42.704 response: 00:08:42.704 { 00:08:42.704 "code": -32602, 00:08:42.704 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:42.704 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:42.704 17:12:51 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:08:42.704 17:12:51 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode5942 00:08:42.704 [2024-04-24 17:12:51.854231] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5942: invalid model number 'SPDK_Controller' 00:08:42.704 17:12:51 -- target/invalid.sh@50 -- # out='request: 00:08:42.704 { 00:08:42.704 "nqn": "nqn.2016-06.io.spdk:cnode5942", 00:08:42.704 "model_number": "SPDK_Controller\u001f", 00:08:42.704 "method": "nvmf_create_subsystem", 00:08:42.704 "req_id": 1 00:08:42.704 } 00:08:42.704 Got JSON-RPC error response 00:08:42.704 response: 00:08:42.704 { 00:08:42.704 "code": -32602, 00:08:42.704 "message": "Invalid MN SPDK_Controller\u001f" 00:08:42.704 }' 00:08:42.704 17:12:51 -- target/invalid.sh@51 -- # [[ request: 00:08:42.704 { 00:08:42.704 "nqn": "nqn.2016-06.io.spdk:cnode5942", 00:08:42.704 "model_number": "SPDK_Controller\u001f", 00:08:42.704 "method": "nvmf_create_subsystem", 00:08:42.704 "req_id": 1 00:08:42.704 } 00:08:42.704 Got JSON-RPC error response 00:08:42.704 response: 00:08:42.704 { 00:08:42.704 "code": -32602, 00:08:42.704 "message": "Invalid MN SPDK_Controller\u001f" 00:08:42.704 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:42.704 17:12:51 -- target/invalid.sh@54 -- # gen_random_s 21 00:08:42.704 17:12:51 -- target/invalid.sh@19 -- # local length=21 ll 00:08:42.704 17:12:51 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' 
'70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:42.704 17:12:51 -- target/invalid.sh@21 -- # local chars 00:08:42.704 17:12:51 -- target/invalid.sh@22 -- # local string 00:08:42.704 17:12:51 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:42.704 17:12:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:42.704 17:12:51 -- target/invalid.sh@25 -- # printf %x 47 00:08:42.704 17:12:51 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:08:42.704 17:12:51 -- target/invalid.sh@25 -- # string+=/ 00:08:42.704 17:12:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:42.704 17:12:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:42.704 17:12:51 -- target/invalid.sh@25 -- # printf %x 91 00:08:42.704 17:12:51 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:08:42.704 17:12:51 -- target/invalid.sh@25 -- # string+='[' 00:08:42.704 17:12:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:42.704 17:12:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:42.704 17:12:51 -- target/invalid.sh@25 -- # printf %x 41 00:08:42.704 17:12:51 -- target/invalid.sh@25 -- # echo -e '\x29' 00:08:42.704 17:12:51 -- target/invalid.sh@25 -- # string+=')' 00:08:42.704 17:12:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:42.704 17:12:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:42.704 17:12:51 -- target/invalid.sh@25 -- # printf %x 33 00:08:42.704 17:12:51 -- target/invalid.sh@25 -- # echo -e '\x21' 00:08:42.704 17:12:51 -- target/invalid.sh@25 -- # string+='!' 00:08:42.704 17:12:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:42.704 17:12:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:42.704 17:12:51 -- target/invalid.sh@25 -- # printf %x 64 00:08:42.704 17:12:51 -- target/invalid.sh@25 -- # echo -e '\x40' 00:08:42.704 17:12:51 -- target/invalid.sh@25 -- # string+=@ 00:08:42.704 17:12:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:42.704 17:12:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:42.704 17:12:51 -- target/invalid.sh@25 -- # printf %x 67 00:08:42.704 17:12:51 -- target/invalid.sh@25 -- # echo -e '\x43' 00:08:42.704 17:12:51 -- target/invalid.sh@25 -- # string+=C 00:08:42.704 17:12:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:42.704 17:12:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:42.704 17:12:51 -- target/invalid.sh@25 -- # printf %x 103 00:08:42.704 17:12:51 -- target/invalid.sh@25 -- # echo -e '\x67' 00:08:42.704 17:12:51 -- target/invalid.sh@25 -- # string+=g 00:08:42.704 17:12:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:42.704 17:12:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:42.704 17:12:51 -- target/invalid.sh@25 -- # printf %x 90 00:08:42.704 17:12:51 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:08:42.704 17:12:51 -- target/invalid.sh@25 -- # string+=Z 00:08:42.704 17:12:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:42.704 17:12:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:42.704 17:12:51 -- target/invalid.sh@25 -- # printf %x 91 00:08:42.704 17:12:51 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:08:42.704 17:12:51 -- target/invalid.sh@25 -- # string+='[' 00:08:42.704 17:12:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:42.704 17:12:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:42.704 17:12:51 -- target/invalid.sh@25 -- # printf %x 83 
00:08:42.704 17:12:51 -- target/invalid.sh@25 -- # echo -e '\x53' 00:08:42.962 17:12:51 -- target/invalid.sh@25 -- # string+=S 00:08:42.963 17:12:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:42.963 17:12:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:42.963 17:12:51 -- target/invalid.sh@25 -- # printf %x 95 00:08:42.963 17:12:51 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:08:42.963 17:12:51 -- target/invalid.sh@25 -- # string+=_ 00:08:42.963 17:12:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:42.963 17:12:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:42.963 17:12:51 -- target/invalid.sh@25 -- # printf %x 89 00:08:42.963 17:12:51 -- target/invalid.sh@25 -- # echo -e '\x59' 00:08:42.963 17:12:51 -- target/invalid.sh@25 -- # string+=Y 00:08:42.963 17:12:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:42.963 17:12:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:42.963 17:12:51 -- target/invalid.sh@25 -- # printf %x 108 00:08:42.963 17:12:51 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:08:42.963 17:12:51 -- target/invalid.sh@25 -- # string+=l 00:08:42.963 17:12:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:42.963 17:12:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:42.963 17:12:51 -- target/invalid.sh@25 -- # printf %x 94 00:08:42.963 17:12:51 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:08:42.963 17:12:51 -- target/invalid.sh@25 -- # string+='^' 00:08:42.963 17:12:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:42.963 17:12:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:42.963 17:12:51 -- target/invalid.sh@25 -- # printf %x 60 00:08:42.963 17:12:51 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:08:42.963 17:12:51 -- target/invalid.sh@25 -- # string+='<' 00:08:42.963 17:12:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:42.963 17:12:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:42.963 17:12:51 -- target/invalid.sh@25 -- # printf %x 64 00:08:42.963 17:12:51 -- target/invalid.sh@25 -- # echo -e '\x40' 00:08:42.963 17:12:51 -- target/invalid.sh@25 -- # string+=@ 00:08:42.963 17:12:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:42.963 17:12:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:42.963 17:12:51 -- target/invalid.sh@25 -- # printf %x 76 00:08:42.963 17:12:51 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:08:42.963 17:12:51 -- target/invalid.sh@25 -- # string+=L 00:08:42.963 17:12:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:42.963 17:12:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:42.963 17:12:51 -- target/invalid.sh@25 -- # printf %x 65 00:08:42.963 17:12:51 -- target/invalid.sh@25 -- # echo -e '\x41' 00:08:42.963 17:12:51 -- target/invalid.sh@25 -- # string+=A 00:08:42.963 17:12:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:42.963 17:12:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:42.963 17:12:52 -- target/invalid.sh@25 -- # printf %x 102 00:08:42.963 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x66' 00:08:42.963 17:12:52 -- target/invalid.sh@25 -- # string+=f 00:08:42.963 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:42.963 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:42.963 17:12:52 -- target/invalid.sh@25 -- # printf %x 106 00:08:42.963 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:08:42.963 17:12:52 -- target/invalid.sh@25 -- # string+=j 00:08:42.963 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:42.963 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:42.963 17:12:52 -- target/invalid.sh@25 -- # printf %x 48 
00:08:42.963 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x30' 00:08:42.963 17:12:52 -- target/invalid.sh@25 -- # string+=0 00:08:42.963 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:42.963 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:42.963 17:12:52 -- target/invalid.sh@28 -- # [[ / == \- ]] 00:08:42.963 17:12:52 -- target/invalid.sh@31 -- # echo '/[)!@CgZ[S_Yl^<@LAfj0' 00:08:42.963 17:12:52 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '/[)!@CgZ[S_Yl^<@LAfj0' nqn.2016-06.io.spdk:cnode18745 00:08:42.963 [2024-04-24 17:12:52.171306] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18745: invalid serial number '/[)!@CgZ[S_Yl^<@LAfj0' 00:08:42.963 17:12:52 -- target/invalid.sh@54 -- # out='request: 00:08:42.963 { 00:08:42.963 "nqn": "nqn.2016-06.io.spdk:cnode18745", 00:08:42.963 "serial_number": "/[)!@CgZ[S_Yl^<@LAfj0", 00:08:42.963 "method": "nvmf_create_subsystem", 00:08:42.963 "req_id": 1 00:08:42.963 } 00:08:42.963 Got JSON-RPC error response 00:08:42.963 response: 00:08:42.963 { 00:08:42.963 "code": -32602, 00:08:42.963 "message": "Invalid SN /[)!@CgZ[S_Yl^<@LAfj0" 00:08:42.963 }' 00:08:42.963 17:12:52 -- target/invalid.sh@55 -- # [[ request: 00:08:42.963 { 00:08:42.963 "nqn": "nqn.2016-06.io.spdk:cnode18745", 00:08:42.963 "serial_number": "/[)!@CgZ[S_Yl^<@LAfj0", 00:08:42.963 "method": "nvmf_create_subsystem", 00:08:42.963 "req_id": 1 00:08:42.963 } 00:08:42.963 Got JSON-RPC error response 00:08:42.963 response: 00:08:42.963 { 00:08:42.963 "code": -32602, 00:08:42.963 "message": "Invalid SN /[)!@CgZ[S_Yl^<@LAfj0" 00:08:42.963 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:42.963 17:12:52 -- target/invalid.sh@58 -- # gen_random_s 41 00:08:42.963 17:12:52 -- target/invalid.sh@19 -- # local length=41 ll 00:08:42.963 17:12:52 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:42.963 17:12:52 -- target/invalid.sh@21 -- # local chars 00:08:42.963 17:12:52 -- target/invalid.sh@22 -- # local string 00:08:42.963 17:12:52 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:42.963 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:42.963 17:12:52 -- target/invalid.sh@25 -- # printf %x 50 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x32' 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # string+=2 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # printf %x 42 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # string+='*' 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # printf %x 85 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x55' 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # string+=U 00:08:43.222 17:12:52 -- 
target/invalid.sh@24 -- # (( ll++ )) 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # printf %x 107 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # string+=k 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # printf %x 116 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x74' 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # string+=t 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # printf %x 89 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x59' 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # string+=Y 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # printf %x 49 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x31' 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # string+=1 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # printf %x 56 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x38' 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # string+=8 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # printf %x 108 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # string+=l 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # printf %x 67 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x43' 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # string+=C 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # printf %x 107 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # string+=k 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # printf %x 110 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # string+=n 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # printf %x 121 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x79' 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # string+=y 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # printf %x 92 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # string+='\' 00:08:43.222 17:12:52 -- 
target/invalid.sh@24 -- # (( ll++ )) 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # printf %x 53 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x35' 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # string+=5 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # printf %x 88 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x58' 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # string+=X 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # printf %x 66 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x42' 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # string+=B 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # printf %x 67 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x43' 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # string+=C 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # printf %x 81 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x51' 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # string+=Q 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # printf %x 123 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # string+='{' 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # printf %x 94 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # string+='^' 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.222 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.222 17:12:52 -- target/invalid.sh@25 -- # printf %x 92 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # string+='\' 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # printf %x 93 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # string+=']' 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # printf %x 109 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # string+=m 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # printf %x 39 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x27' 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # string+=\' 00:08:43.223 17:12:52 -- 
target/invalid.sh@24 -- # (( ll++ )) 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # printf %x 119 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x77' 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # string+=w 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # printf %x 58 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # string+=: 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # printf %x 122 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # string+=z 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # printf %x 104 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x68' 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # string+=h 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # printf %x 64 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x40' 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # string+=@ 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # printf %x 50 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x32' 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # string+=2 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # printf %x 107 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # string+=k 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # printf %x 90 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # string+=Z 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # printf %x 69 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x45' 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # string+=E 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # printf %x 41 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x29' 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # string+=')' 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # printf %x 88 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x58' 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # string+=X 00:08:43.223 17:12:52 -- 
target/invalid.sh@24 -- # (( ll++ )) 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # printf %x 33 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x21' 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # string+='!' 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # printf %x 81 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x51' 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # string+=Q 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # printf %x 123 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # string+='{' 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # printf %x 66 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x42' 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # string+=B 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # printf %x 85 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # echo -e '\x55' 00:08:43.223 17:12:52 -- target/invalid.sh@25 -- # string+=U 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:43.223 17:12:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:43.223 17:12:52 -- target/invalid.sh@28 -- # [[ 2 == \- ]] 00:08:43.223 17:12:52 -- target/invalid.sh@31 -- # echo '2*UktY18lCkny\5XBCQ{^\]m'\''w:zh@2kZE)X!Q{BU' 00:08:43.223 17:12:52 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '2*UktY18lCkny\5XBCQ{^\]m'\''w:zh@2kZE)X!Q{BU' nqn.2016-06.io.spdk:cnode4233 00:08:43.482 [2024-04-24 17:12:52.608770] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4233: invalid model number '2*UktY18lCkny\5XBCQ{^\]m'w:zh@2kZE)X!Q{BU' 00:08:43.482 17:12:52 -- target/invalid.sh@58 -- # out='request: 00:08:43.482 { 00:08:43.482 "nqn": "nqn.2016-06.io.spdk:cnode4233", 00:08:43.482 "model_number": "2*UktY18lCkny\\5XBCQ{^\\]m'\''w:zh@2kZE)X!Q{BU", 00:08:43.482 "method": "nvmf_create_subsystem", 00:08:43.482 "req_id": 1 00:08:43.482 } 00:08:43.482 Got JSON-RPC error response 00:08:43.482 response: 00:08:43.482 { 00:08:43.482 "code": -32602, 00:08:43.482 "message": "Invalid MN 2*UktY18lCkny\\5XBCQ{^\\]m'\''w:zh@2kZE)X!Q{BU" 00:08:43.482 }' 00:08:43.482 17:12:52 -- target/invalid.sh@59 -- # [[ request: 00:08:43.482 { 00:08:43.482 "nqn": "nqn.2016-06.io.spdk:cnode4233", 00:08:43.482 "model_number": "2*UktY18lCkny\\5XBCQ{^\\]m'w:zh@2kZE)X!Q{BU", 00:08:43.482 "method": "nvmf_create_subsystem", 00:08:43.482 "req_id": 1 00:08:43.482 } 00:08:43.482 Got JSON-RPC error response 00:08:43.482 response: 00:08:43.482 { 00:08:43.482 "code": -32602, 00:08:43.482 "message": "Invalid MN 2*UktY18lCkny\\5XBCQ{^\\]m'w:zh@2kZE)X!Q{BU" 00:08:43.482 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:43.482 17:12:52 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:08:43.739 [2024-04-24 17:12:52.810250] 
rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1da6620/0x1daab10) succeed. 00:08:43.739 [2024-04-24 17:12:52.820257] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1da7c10/0x1dec1a0) succeed. 00:08:43.739 17:12:52 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:08:43.997 17:12:53 -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:08:43.997 17:12:53 -- target/invalid.sh@67 -- # echo '192.168.100.8 00:08:43.997 192.168.100.9' 00:08:43.997 17:12:53 -- target/invalid.sh@67 -- # head -n 1 00:08:43.997 17:12:53 -- target/invalid.sh@67 -- # IP=192.168.100.8 00:08:43.997 17:12:53 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:08:44.257 [2024-04-24 17:12:53.281511] nvmf_rpc.c: 792:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:08:44.257 17:12:53 -- target/invalid.sh@69 -- # out='request: 00:08:44.257 { 00:08:44.257 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:44.257 "listen_address": { 00:08:44.257 "trtype": "rdma", 00:08:44.257 "traddr": "192.168.100.8", 00:08:44.257 "trsvcid": "4421" 00:08:44.257 }, 00:08:44.257 "method": "nvmf_subsystem_remove_listener", 00:08:44.257 "req_id": 1 00:08:44.257 } 00:08:44.257 Got JSON-RPC error response 00:08:44.257 response: 00:08:44.257 { 00:08:44.257 "code": -32602, 00:08:44.257 "message": "Invalid parameters" 00:08:44.257 }' 00:08:44.257 17:12:53 -- target/invalid.sh@70 -- # [[ request: 00:08:44.257 { 00:08:44.257 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:44.257 "listen_address": { 00:08:44.257 "trtype": "rdma", 00:08:44.257 "traddr": "192.168.100.8", 00:08:44.257 "trsvcid": "4421" 00:08:44.257 }, 00:08:44.257 "method": "nvmf_subsystem_remove_listener", 00:08:44.257 "req_id": 1 00:08:44.257 } 00:08:44.257 Got JSON-RPC error response 00:08:44.257 response: 00:08:44.257 { 00:08:44.257 "code": -32602, 00:08:44.257 "message": "Invalid parameters" 00:08:44.257 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:08:44.257 17:12:53 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21272 -i 0 00:08:44.257 [2024-04-24 17:12:53.442073] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21272: invalid cntlid range [0-65519] 00:08:44.257 17:12:53 -- target/invalid.sh@73 -- # out='request: 00:08:44.257 { 00:08:44.257 "nqn": "nqn.2016-06.io.spdk:cnode21272", 00:08:44.257 "min_cntlid": 0, 00:08:44.257 "method": "nvmf_create_subsystem", 00:08:44.257 "req_id": 1 00:08:44.257 } 00:08:44.257 Got JSON-RPC error response 00:08:44.257 response: 00:08:44.257 { 00:08:44.257 "code": -32602, 00:08:44.257 "message": "Invalid cntlid range [0-65519]" 00:08:44.257 }' 00:08:44.257 17:12:53 -- target/invalid.sh@74 -- # [[ request: 00:08:44.257 { 00:08:44.257 "nqn": "nqn.2016-06.io.spdk:cnode21272", 00:08:44.257 "min_cntlid": 0, 00:08:44.257 "method": "nvmf_create_subsystem", 00:08:44.257 "req_id": 1 00:08:44.257 } 00:08:44.257 Got JSON-RPC error response 00:08:44.257 response: 00:08:44.257 { 00:08:44.257 "code": -32602, 00:08:44.257 "message": "Invalid cntlid range [0-65519]" 00:08:44.257 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:44.257 17:12:53 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28670 -i 65520 00:08:44.516 [2024-04-24 17:12:53.606686] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28670: invalid cntlid range [65520-65519] 00:08:44.516 17:12:53 -- target/invalid.sh@75 -- # out='request: 00:08:44.516 { 00:08:44.516 "nqn": "nqn.2016-06.io.spdk:cnode28670", 00:08:44.516 "min_cntlid": 65520, 00:08:44.516 "method": "nvmf_create_subsystem", 00:08:44.516 "req_id": 1 00:08:44.516 } 00:08:44.516 Got JSON-RPC error response 00:08:44.516 response: 00:08:44.516 { 00:08:44.516 "code": -32602, 00:08:44.516 "message": "Invalid cntlid range [65520-65519]" 00:08:44.516 }' 00:08:44.516 17:12:53 -- target/invalid.sh@76 -- # [[ request: 00:08:44.516 { 00:08:44.516 "nqn": "nqn.2016-06.io.spdk:cnode28670", 00:08:44.516 "min_cntlid": 65520, 00:08:44.516 "method": "nvmf_create_subsystem", 00:08:44.516 "req_id": 1 00:08:44.516 } 00:08:44.516 Got JSON-RPC error response 00:08:44.516 response: 00:08:44.516 { 00:08:44.516 "code": -32602, 00:08:44.516 "message": "Invalid cntlid range [65520-65519]" 00:08:44.516 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:44.516 17:12:53 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8015 -I 0 00:08:44.774 [2024-04-24 17:12:53.775297] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8015: invalid cntlid range [1-0] 00:08:44.774 17:12:53 -- target/invalid.sh@77 -- # out='request: 00:08:44.774 { 00:08:44.774 "nqn": "nqn.2016-06.io.spdk:cnode8015", 00:08:44.774 "max_cntlid": 0, 00:08:44.774 "method": "nvmf_create_subsystem", 00:08:44.774 "req_id": 1 00:08:44.774 } 00:08:44.774 Got JSON-RPC error response 00:08:44.774 response: 00:08:44.774 { 00:08:44.774 "code": -32602, 00:08:44.774 "message": "Invalid cntlid range [1-0]" 00:08:44.774 }' 00:08:44.774 17:12:53 -- target/invalid.sh@78 -- # [[ request: 00:08:44.774 { 00:08:44.774 "nqn": "nqn.2016-06.io.spdk:cnode8015", 00:08:44.774 "max_cntlid": 0, 00:08:44.774 "method": "nvmf_create_subsystem", 00:08:44.774 "req_id": 1 00:08:44.774 } 00:08:44.774 Got JSON-RPC error response 00:08:44.774 response: 00:08:44.774 { 00:08:44.774 "code": -32602, 00:08:44.774 "message": "Invalid cntlid range [1-0]" 00:08:44.774 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:44.774 17:12:53 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28478 -I 65520 00:08:44.774 [2024-04-24 17:12:53.943890] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28478: invalid cntlid range [1-65520] 00:08:44.774 17:12:53 -- target/invalid.sh@79 -- # out='request: 00:08:44.774 { 00:08:44.774 "nqn": "nqn.2016-06.io.spdk:cnode28478", 00:08:44.774 "max_cntlid": 65520, 00:08:44.774 "method": "nvmf_create_subsystem", 00:08:44.774 "req_id": 1 00:08:44.774 } 00:08:44.774 Got JSON-RPC error response 00:08:44.774 response: 00:08:44.774 { 00:08:44.774 "code": -32602, 00:08:44.774 "message": "Invalid cntlid range [1-65520]" 00:08:44.774 }' 00:08:44.775 17:12:53 -- target/invalid.sh@80 -- # [[ request: 00:08:44.775 { 00:08:44.775 "nqn": "nqn.2016-06.io.spdk:cnode28478", 00:08:44.775 "max_cntlid": 65520, 00:08:44.775 "method": "nvmf_create_subsystem", 00:08:44.775 "req_id": 1 00:08:44.775 } 00:08:44.775 Got JSON-RPC error response 00:08:44.775 response: 00:08:44.775 { 00:08:44.775 "code": -32602, 00:08:44.775 
"message": "Invalid cntlid range [1-65520]" 00:08:44.775 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:44.775 17:12:53 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30447 -i 6 -I 5 00:08:45.033 [2024-04-24 17:12:54.112535] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30447: invalid cntlid range [6-5] 00:08:45.033 17:12:54 -- target/invalid.sh@83 -- # out='request: 00:08:45.033 { 00:08:45.033 "nqn": "nqn.2016-06.io.spdk:cnode30447", 00:08:45.033 "min_cntlid": 6, 00:08:45.033 "max_cntlid": 5, 00:08:45.033 "method": "nvmf_create_subsystem", 00:08:45.033 "req_id": 1 00:08:45.033 } 00:08:45.033 Got JSON-RPC error response 00:08:45.033 response: 00:08:45.033 { 00:08:45.033 "code": -32602, 00:08:45.033 "message": "Invalid cntlid range [6-5]" 00:08:45.033 }' 00:08:45.033 17:12:54 -- target/invalid.sh@84 -- # [[ request: 00:08:45.033 { 00:08:45.033 "nqn": "nqn.2016-06.io.spdk:cnode30447", 00:08:45.033 "min_cntlid": 6, 00:08:45.033 "max_cntlid": 5, 00:08:45.033 "method": "nvmf_create_subsystem", 00:08:45.033 "req_id": 1 00:08:45.033 } 00:08:45.033 Got JSON-RPC error response 00:08:45.033 response: 00:08:45.033 { 00:08:45.033 "code": -32602, 00:08:45.033 "message": "Invalid cntlid range [6-5]" 00:08:45.033 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:45.033 17:12:54 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:08:45.033 17:12:54 -- target/invalid.sh@87 -- # out='request: 00:08:45.033 { 00:08:45.033 "name": "foobar", 00:08:45.033 "method": "nvmf_delete_target", 00:08:45.033 "req_id": 1 00:08:45.033 } 00:08:45.033 Got JSON-RPC error response 00:08:45.033 response: 00:08:45.033 { 00:08:45.033 "code": -32602, 00:08:45.033 "message": "The specified target doesn'\''t exist, cannot delete it." 00:08:45.033 }' 00:08:45.033 17:12:54 -- target/invalid.sh@88 -- # [[ request: 00:08:45.033 { 00:08:45.033 "name": "foobar", 00:08:45.033 "method": "nvmf_delete_target", 00:08:45.033 "req_id": 1 00:08:45.033 } 00:08:45.033 Got JSON-RPC error response 00:08:45.033 response: 00:08:45.033 { 00:08:45.033 "code": -32602, 00:08:45.033 "message": "The specified target doesn't exist, cannot delete it." 
00:08:45.033 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:08:45.033 17:12:54 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:08:45.033 17:12:54 -- target/invalid.sh@91 -- # nvmftestfini 00:08:45.033 17:12:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:45.033 17:12:54 -- nvmf/common.sh@117 -- # sync 00:08:45.033 17:12:54 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:45.033 17:12:54 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:45.033 17:12:54 -- nvmf/common.sh@120 -- # set +e 00:08:45.033 17:12:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:45.033 17:12:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:45.033 rmmod nvme_rdma 00:08:45.033 rmmod nvme_fabrics 00:08:45.293 17:12:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:45.293 17:12:54 -- nvmf/common.sh@124 -- # set -e 00:08:45.293 17:12:54 -- nvmf/common.sh@125 -- # return 0 00:08:45.293 17:12:54 -- nvmf/common.sh@478 -- # '[' -n 2973136 ']' 00:08:45.293 17:12:54 -- nvmf/common.sh@479 -- # killprocess 2973136 00:08:45.293 17:12:54 -- common/autotest_common.sh@936 -- # '[' -z 2973136 ']' 00:08:45.293 17:12:54 -- common/autotest_common.sh@940 -- # kill -0 2973136 00:08:45.293 17:12:54 -- common/autotest_common.sh@941 -- # uname 00:08:45.293 17:12:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:45.293 17:12:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2973136 00:08:45.293 17:12:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:45.293 17:12:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:45.293 17:12:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2973136' 00:08:45.293 killing process with pid 2973136 00:08:45.293 17:12:54 -- common/autotest_common.sh@955 -- # kill 2973136 00:08:45.293 17:12:54 -- common/autotest_common.sh@960 -- # wait 2973136 00:08:45.553 17:12:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:45.553 17:12:54 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:08:45.553 00:08:45.553 real 0m9.575s 00:08:45.553 user 0m19.407s 00:08:45.553 sys 0m4.935s 00:08:45.553 17:12:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:45.553 17:12:54 -- common/autotest_common.sh@10 -- # set +x 00:08:45.553 ************************************ 00:08:45.553 END TEST nvmf_invalid 00:08:45.553 ************************************ 00:08:45.553 17:12:54 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:08:45.553 17:12:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:45.553 17:12:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:45.553 17:12:54 -- common/autotest_common.sh@10 -- # set +x 00:08:45.553 ************************************ 00:08:45.553 START TEST nvmf_abort 00:08:45.553 ************************************ 00:08:45.553 17:12:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:08:45.812 * Looking for test storage... 
00:08:45.812 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:45.812 17:12:54 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:45.812 17:12:54 -- nvmf/common.sh@7 -- # uname -s 00:08:45.812 17:12:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.812 17:12:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.812 17:12:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.812 17:12:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.812 17:12:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.812 17:12:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.812 17:12:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.812 17:12:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.812 17:12:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.812 17:12:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.812 17:12:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:08:45.812 17:12:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:08:45.812 17:12:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.812 17:12:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.812 17:12:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:45.812 17:12:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.812 17:12:54 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:45.812 17:12:54 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.812 17:12:54 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.812 17:12:54 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.812 17:12:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.812 17:12:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.812 17:12:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.812 17:12:54 -- paths/export.sh@5 -- # export PATH 00:08:45.813 17:12:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.813 17:12:54 -- nvmf/common.sh@47 -- # : 0 00:08:45.813 17:12:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:45.813 17:12:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:45.813 17:12:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.813 17:12:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.813 17:12:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.813 17:12:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:45.813 17:12:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:45.813 17:12:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:45.813 17:12:54 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:45.813 17:12:54 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:45.813 17:12:54 -- target/abort.sh@14 -- # nvmftestinit 00:08:45.813 17:12:54 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:08:45.813 17:12:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.813 17:12:54 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:45.813 17:12:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:45.813 17:12:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:45.813 17:12:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.813 17:12:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:45.813 17:12:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.813 17:12:54 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:45.813 17:12:54 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:45.813 17:12:54 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:45.813 17:12:54 -- common/autotest_common.sh@10 -- # set +x 00:08:51.232 17:12:59 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:51.232 17:12:59 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:51.232 17:12:59 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:51.232 17:12:59 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:51.232 17:13:00 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:51.232 17:13:00 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:51.232 17:13:00 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:51.232 17:13:00 -- nvmf/common.sh@295 -- # net_devs=() 00:08:51.232 17:13:00 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:51.232 17:13:00 -- nvmf/common.sh@296 -- 
# e810=() 00:08:51.232 17:13:00 -- nvmf/common.sh@296 -- # local -ga e810 00:08:51.232 17:13:00 -- nvmf/common.sh@297 -- # x722=() 00:08:51.232 17:13:00 -- nvmf/common.sh@297 -- # local -ga x722 00:08:51.232 17:13:00 -- nvmf/common.sh@298 -- # mlx=() 00:08:51.232 17:13:00 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:51.232 17:13:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:51.232 17:13:00 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:51.232 17:13:00 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:51.232 17:13:00 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:51.232 17:13:00 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:51.232 17:13:00 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:51.232 17:13:00 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:51.232 17:13:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:51.232 17:13:00 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:51.232 17:13:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:51.232 17:13:00 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:51.232 17:13:00 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:51.232 17:13:00 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:51.232 17:13:00 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:51.232 17:13:00 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:51.232 17:13:00 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:51.232 17:13:00 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:51.232 17:13:00 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:51.232 17:13:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:51.232 17:13:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:08:51.232 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:08:51.233 17:13:00 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:51.233 17:13:00 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:51.233 17:13:00 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:51.233 17:13:00 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:51.233 17:13:00 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:51.233 17:13:00 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:51.233 17:13:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:51.233 17:13:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:08:51.233 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:08:51.233 17:13:00 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:51.233 17:13:00 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:51.233 17:13:00 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:51.233 17:13:00 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:51.233 17:13:00 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:51.233 17:13:00 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:51.233 17:13:00 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:51.233 17:13:00 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:51.233 17:13:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:51.233 17:13:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.233 17:13:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 
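The NIC discovery traced above works purely from PCI vendor/device IDs: the 0x15b3/0x1015 pair found at 0000:da:00.0 and 0000:da:00.1 is what routes this run down the mlx5 path. A quick way to reproduce that check by hand, assuming lspci is available on the test node (it is not part of the harness itself), is:

    # List devices matching the Mellanox vendor/device pair the script keyed on above.
    # -D keeps the PCI domain so addresses match the 0000:da:00.x form in the log,
    # -nn prints both the textual name and the numeric [15b3:1015] ID.
    lspci -Dnn -d 15b3:1015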
00:08:51.233 17:13:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.233 17:13:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:08:51.233 Found net devices under 0000:da:00.0: mlx_0_0 00:08:51.233 17:13:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.233 17:13:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:51.233 17:13:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.233 17:13:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:51.233 17:13:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.233 17:13:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:08:51.233 Found net devices under 0000:da:00.1: mlx_0_1 00:08:51.233 17:13:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.233 17:13:00 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:51.233 17:13:00 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:51.233 17:13:00 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:51.233 17:13:00 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:08:51.233 17:13:00 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:08:51.233 17:13:00 -- nvmf/common.sh@409 -- # rdma_device_init 00:08:51.233 17:13:00 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:08:51.233 17:13:00 -- nvmf/common.sh@58 -- # uname 00:08:51.233 17:13:00 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:51.233 17:13:00 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:51.233 17:13:00 -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:51.233 17:13:00 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:51.233 17:13:00 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:51.233 17:13:00 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:51.233 17:13:00 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:51.233 17:13:00 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:51.233 17:13:00 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:08:51.233 17:13:00 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:51.233 17:13:00 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:51.233 17:13:00 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:51.233 17:13:00 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:51.233 17:13:00 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:51.233 17:13:00 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:51.233 17:13:00 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:51.233 17:13:00 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:51.233 17:13:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:51.233 17:13:00 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:51.233 17:13:00 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:51.233 17:13:00 -- nvmf/common.sh@105 -- # continue 2 00:08:51.233 17:13:00 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:51.233 17:13:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:51.233 17:13:00 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:51.233 17:13:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:51.233 17:13:00 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:51.233 17:13:00 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:51.233 17:13:00 -- nvmf/common.sh@105 -- # continue 2 00:08:51.233 17:13:00 -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:08:51.233 17:13:00 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:51.233 17:13:00 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:51.233 17:13:00 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:51.233 17:13:00 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:51.233 17:13:00 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:51.233 17:13:00 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:51.233 17:13:00 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:51.233 17:13:00 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:51.233 430: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:51.233 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:08:51.233 altname enp218s0f0np0 00:08:51.233 altname ens818f0np0 00:08:51.233 inet 192.168.100.8/24 scope global mlx_0_0 00:08:51.233 valid_lft forever preferred_lft forever 00:08:51.233 17:13:00 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:51.233 17:13:00 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:51.233 17:13:00 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:51.233 17:13:00 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:51.233 17:13:00 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:51.233 17:13:00 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:51.233 17:13:00 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:51.233 17:13:00 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:51.233 17:13:00 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:51.233 431: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:51.233 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:08:51.233 altname enp218s0f1np1 00:08:51.233 altname ens818f1np1 00:08:51.233 inet 192.168.100.9/24 scope global mlx_0_1 00:08:51.233 valid_lft forever preferred_lft forever 00:08:51.233 17:13:00 -- nvmf/common.sh@411 -- # return 0 00:08:51.233 17:13:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:51.233 17:13:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:51.233 17:13:00 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:08:51.233 17:13:00 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:08:51.233 17:13:00 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:51.233 17:13:00 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:51.233 17:13:00 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:51.233 17:13:00 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:51.233 17:13:00 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:51.233 17:13:00 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:51.233 17:13:00 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:51.233 17:13:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:51.233 17:13:00 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:51.233 17:13:00 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:51.233 17:13:00 -- nvmf/common.sh@105 -- # continue 2 00:08:51.233 17:13:00 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:51.233 17:13:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:51.233 17:13:00 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:51.233 17:13:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:51.233 17:13:00 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:51.233 17:13:00 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:51.233 17:13:00 -- 
nvmf/common.sh@105 -- # continue 2 00:08:51.233 17:13:00 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:51.233 17:13:00 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:51.233 17:13:00 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:51.233 17:13:00 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:51.233 17:13:00 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:51.233 17:13:00 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:51.233 17:13:00 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:51.233 17:13:00 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:51.233 17:13:00 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:51.233 17:13:00 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:51.233 17:13:00 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:51.233 17:13:00 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:51.233 17:13:00 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:08:51.233 192.168.100.9' 00:08:51.234 17:13:00 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:51.234 192.168.100.9' 00:08:51.234 17:13:00 -- nvmf/common.sh@446 -- # head -n 1 00:08:51.234 17:13:00 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:51.234 17:13:00 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:08:51.234 192.168.100.9' 00:08:51.234 17:13:00 -- nvmf/common.sh@447 -- # tail -n +2 00:08:51.234 17:13:00 -- nvmf/common.sh@447 -- # head -n 1 00:08:51.234 17:13:00 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:51.234 17:13:00 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:08:51.234 17:13:00 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:51.234 17:13:00 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:08:51.234 17:13:00 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:08:51.234 17:13:00 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:08:51.234 17:13:00 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:51.234 17:13:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:51.234 17:13:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:51.234 17:13:00 -- common/autotest_common.sh@10 -- # set +x 00:08:51.234 17:13:00 -- nvmf/common.sh@470 -- # nvmfpid=2975597 00:08:51.234 17:13:00 -- nvmf/common.sh@471 -- # waitforlisten 2975597 00:08:51.234 17:13:00 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:51.234 17:13:00 -- common/autotest_common.sh@817 -- # '[' -z 2975597 ']' 00:08:51.234 17:13:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.234 17:13:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:51.234 17:13:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.234 17:13:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:51.234 17:13:00 -- common/autotest_common.sh@10 -- # set +x 00:08:51.234 [2024-04-24 17:13:00.266985] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
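Before the target is started, each RDMA-capable netdev is resolved to its IPv4 address; stripped of the harness wrapping, the get_ip_address step traced above is just the following pipeline (interface names and addresses are the ones from this particular node):

    # Take the fourth field of `ip -o -4 addr show` (address/prefix) and drop the prefix length.
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8
    ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.9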
00:08:51.234 [2024-04-24 17:13:00.267033] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.234 EAL: No free 2048 kB hugepages reported on node 1 00:08:51.234 [2024-04-24 17:13:00.324733] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:51.234 [2024-04-24 17:13:00.399834] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.234 [2024-04-24 17:13:00.399893] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.234 [2024-04-24 17:13:00.399900] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.234 [2024-04-24 17:13:00.399906] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:51.234 [2024-04-24 17:13:00.399911] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:51.234 [2024-04-24 17:13:00.399960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.234 [2024-04-24 17:13:00.400022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:51.234 [2024-04-24 17:13:00.400023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.172 17:13:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:52.173 17:13:01 -- common/autotest_common.sh@850 -- # return 0 00:08:52.173 17:13:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:52.173 17:13:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:52.173 17:13:01 -- common/autotest_common.sh@10 -- # set +x 00:08:52.173 17:13:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:52.173 17:13:01 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:08:52.173 17:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:52.173 17:13:01 -- common/autotest_common.sh@10 -- # set +x 00:08:52.173 [2024-04-24 17:13:01.132593] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xadf680/0xae3b70) succeed. 00:08:52.173 [2024-04-24 17:13:01.142619] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xae0bd0/0xb25200) succeed. 
00:08:52.173 17:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:52.173 17:13:01 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:52.173 17:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:52.173 17:13:01 -- common/autotest_common.sh@10 -- # set +x 00:08:52.173 Malloc0 00:08:52.173 17:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:52.173 17:13:01 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:52.173 17:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:52.173 17:13:01 -- common/autotest_common.sh@10 -- # set +x 00:08:52.173 Delay0 00:08:52.173 17:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:52.173 17:13:01 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:52.173 17:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:52.173 17:13:01 -- common/autotest_common.sh@10 -- # set +x 00:08:52.173 17:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:52.173 17:13:01 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:52.173 17:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:52.173 17:13:01 -- common/autotest_common.sh@10 -- # set +x 00:08:52.173 17:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:52.173 17:13:01 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:52.173 17:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:52.173 17:13:01 -- common/autotest_common.sh@10 -- # set +x 00:08:52.173 [2024-04-24 17:13:01.296689] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:52.173 17:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:52.173 17:13:01 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:52.173 17:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:52.173 17:13:01 -- common/autotest_common.sh@10 -- # set +x 00:08:52.173 17:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:52.173 17:13:01 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:52.173 EAL: No free 2048 kB hugepages reported on node 1 00:08:52.173 [2024-04-24 17:13:01.389946] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:54.711 Initializing NVMe Controllers 00:08:54.711 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:08:54.711 controller IO queue size 128 less than required 00:08:54.711 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:54.711 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:54.711 Initialization complete. Launching workers. 
00:08:54.711 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 52701 00:08:54.711 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 52762, failed to submit 62 00:08:54.711 success 52702, unsuccess 60, failed 0 00:08:54.711 17:13:03 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:54.711 17:13:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:54.711 17:13:03 -- common/autotest_common.sh@10 -- # set +x 00:08:54.711 17:13:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:54.711 17:13:03 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:54.711 17:13:03 -- target/abort.sh@38 -- # nvmftestfini 00:08:54.711 17:13:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:54.711 17:13:03 -- nvmf/common.sh@117 -- # sync 00:08:54.711 17:13:03 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:54.711 17:13:03 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:54.711 17:13:03 -- nvmf/common.sh@120 -- # set +e 00:08:54.711 17:13:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:54.711 17:13:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:54.711 rmmod nvme_rdma 00:08:54.711 rmmod nvme_fabrics 00:08:54.711 17:13:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:54.711 17:13:03 -- nvmf/common.sh@124 -- # set -e 00:08:54.711 17:13:03 -- nvmf/common.sh@125 -- # return 0 00:08:54.711 17:13:03 -- nvmf/common.sh@478 -- # '[' -n 2975597 ']' 00:08:54.711 17:13:03 -- nvmf/common.sh@479 -- # killprocess 2975597 00:08:54.711 17:13:03 -- common/autotest_common.sh@936 -- # '[' -z 2975597 ']' 00:08:54.711 17:13:03 -- common/autotest_common.sh@940 -- # kill -0 2975597 00:08:54.711 17:13:03 -- common/autotest_common.sh@941 -- # uname 00:08:54.711 17:13:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:54.711 17:13:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2975597 00:08:54.711 17:13:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:54.711 17:13:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:54.711 17:13:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2975597' 00:08:54.711 killing process with pid 2975597 00:08:54.711 17:13:03 -- common/autotest_common.sh@955 -- # kill 2975597 00:08:54.711 17:13:03 -- common/autotest_common.sh@960 -- # wait 2975597 00:08:54.711 17:13:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:54.711 17:13:03 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:08:54.711 00:08:54.711 real 0m9.121s 00:08:54.711 user 0m14.092s 00:08:54.711 sys 0m4.424s 00:08:54.712 17:13:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:54.712 17:13:03 -- common/autotest_common.sh@10 -- # set +x 00:08:54.712 ************************************ 00:08:54.712 END TEST nvmf_abort 00:08:54.712 ************************************ 00:08:54.712 17:13:03 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:08:54.712 17:13:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:54.712 17:13:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:54.712 17:13:03 -- common/autotest_common.sh@10 -- # set +x 00:08:54.971 ************************************ 00:08:54.971 START TEST nvmf_ns_hotplug_stress 00:08:54.971 ************************************ 00:08:54.971 17:13:04 -- common/autotest_common.sh@1111 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:08:54.971 * Looking for test storage... 00:08:54.971 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:54.971 17:13:04 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:54.971 17:13:04 -- nvmf/common.sh@7 -- # uname -s 00:08:54.971 17:13:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:54.971 17:13:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:54.971 17:13:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:54.971 17:13:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:54.971 17:13:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:54.971 17:13:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:54.971 17:13:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:54.971 17:13:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:54.971 17:13:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:54.971 17:13:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:54.971 17:13:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:08:54.971 17:13:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:08:54.971 17:13:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:54.971 17:13:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:54.971 17:13:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:54.971 17:13:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:54.971 17:13:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:54.971 17:13:04 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:54.971 17:13:04 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:54.971 17:13:04 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:54.971 17:13:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.971 17:13:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.971 17:13:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.971 17:13:04 -- paths/export.sh@5 -- # export PATH 00:08:54.971 17:13:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.971 17:13:04 -- nvmf/common.sh@47 -- # : 0 00:08:54.971 17:13:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:54.971 17:13:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:54.971 17:13:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:54.971 17:13:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:54.971 17:13:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:54.971 17:13:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:54.971 17:13:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:54.971 17:13:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:54.971 17:13:04 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:54.971 17:13:04 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:08:54.971 17:13:04 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:08:54.971 17:13:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:54.971 17:13:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:54.971 17:13:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:54.971 17:13:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:54.971 17:13:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.971 17:13:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:54.971 17:13:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.971 17:13:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:54.971 17:13:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:54.971 17:13:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:54.971 17:13:04 -- common/autotest_common.sh@10 -- # set +x 00:09:00.242 17:13:08 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:00.242 17:13:08 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:00.242 17:13:08 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:00.242 17:13:08 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:00.242 17:13:08 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:00.242 17:13:08 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:00.242 17:13:08 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:00.242 17:13:08 -- nvmf/common.sh@295 -- # net_devs=() 00:09:00.242 17:13:08 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:00.242 17:13:08 -- nvmf/common.sh@296 -- 
# e810=() 00:09:00.242 17:13:08 -- nvmf/common.sh@296 -- # local -ga e810 00:09:00.242 17:13:08 -- nvmf/common.sh@297 -- # x722=() 00:09:00.242 17:13:08 -- nvmf/common.sh@297 -- # local -ga x722 00:09:00.242 17:13:08 -- nvmf/common.sh@298 -- # mlx=() 00:09:00.242 17:13:08 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:00.242 17:13:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:00.242 17:13:08 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:00.242 17:13:08 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:00.242 17:13:08 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:00.242 17:13:08 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:00.242 17:13:08 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:00.242 17:13:08 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:00.242 17:13:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:00.242 17:13:08 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:00.242 17:13:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:00.242 17:13:08 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:00.242 17:13:08 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:00.242 17:13:08 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:00.242 17:13:08 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:00.242 17:13:08 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:00.242 17:13:08 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:00.242 17:13:08 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:00.242 17:13:08 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:00.242 17:13:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:00.242 17:13:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:09:00.242 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:09:00.242 17:13:08 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:00.242 17:13:08 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:00.242 17:13:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:00.242 17:13:08 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:00.242 17:13:08 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:00.242 17:13:08 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:00.242 17:13:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:00.242 17:13:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:09:00.242 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:09:00.242 17:13:08 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:00.242 17:13:08 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:00.242 17:13:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:00.242 17:13:08 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:00.242 17:13:08 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:00.242 17:13:08 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:00.242 17:13:08 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:00.242 17:13:08 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:00.242 17:13:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:00.242 17:13:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.242 17:13:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 
00:09:00.242 17:13:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.242 17:13:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:09:00.242 Found net devices under 0000:da:00.0: mlx_0_0 00:09:00.242 17:13:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.242 17:13:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:00.242 17:13:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.242 17:13:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:00.242 17:13:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.242 17:13:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:09:00.242 Found net devices under 0000:da:00.1: mlx_0_1 00:09:00.242 17:13:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.242 17:13:08 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:00.242 17:13:08 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:00.242 17:13:08 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:00.242 17:13:08 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:09:00.242 17:13:08 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:09:00.242 17:13:08 -- nvmf/common.sh@409 -- # rdma_device_init 00:09:00.242 17:13:08 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:09:00.242 17:13:08 -- nvmf/common.sh@58 -- # uname 00:09:00.242 17:13:08 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:00.242 17:13:09 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:00.242 17:13:09 -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:00.242 17:13:09 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:00.242 17:13:09 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:00.242 17:13:09 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:00.242 17:13:09 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:00.242 17:13:09 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:00.242 17:13:09 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:09:00.242 17:13:09 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:00.242 17:13:09 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:00.242 17:13:09 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:00.242 17:13:09 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:00.242 17:13:09 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:00.242 17:13:09 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:00.242 17:13:09 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:00.242 17:13:09 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:00.242 17:13:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:00.242 17:13:09 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:00.242 17:13:09 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:00.242 17:13:09 -- nvmf/common.sh@105 -- # continue 2 00:09:00.243 17:13:09 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:00.243 17:13:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:00.243 17:13:09 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:00.243 17:13:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:00.243 17:13:09 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:00.243 17:13:09 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:00.243 17:13:09 -- nvmf/common.sh@105 -- # continue 2 00:09:00.243 17:13:09 -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:09:00.243 17:13:09 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:00.243 17:13:09 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:00.243 17:13:09 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:00.243 17:13:09 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:00.243 17:13:09 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:00.243 17:13:09 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:00.243 17:13:09 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:00.243 17:13:09 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:00.243 430: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:00.243 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:09:00.243 altname enp218s0f0np0 00:09:00.243 altname ens818f0np0 00:09:00.243 inet 192.168.100.8/24 scope global mlx_0_0 00:09:00.243 valid_lft forever preferred_lft forever 00:09:00.243 17:13:09 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:00.243 17:13:09 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:00.243 17:13:09 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:00.243 17:13:09 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:00.243 17:13:09 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:00.243 17:13:09 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:00.243 17:13:09 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:00.243 17:13:09 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:00.243 17:13:09 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:00.243 431: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:00.243 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:09:00.243 altname enp218s0f1np1 00:09:00.243 altname ens818f1np1 00:09:00.243 inet 192.168.100.9/24 scope global mlx_0_1 00:09:00.243 valid_lft forever preferred_lft forever 00:09:00.243 17:13:09 -- nvmf/common.sh@411 -- # return 0 00:09:00.243 17:13:09 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:00.243 17:13:09 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:00.243 17:13:09 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:09:00.243 17:13:09 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:09:00.243 17:13:09 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:00.243 17:13:09 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:00.243 17:13:09 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:00.243 17:13:09 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:00.243 17:13:09 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:00.243 17:13:09 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:00.243 17:13:09 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:00.243 17:13:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:00.243 17:13:09 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:00.243 17:13:09 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:00.243 17:13:09 -- nvmf/common.sh@105 -- # continue 2 00:09:00.243 17:13:09 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:00.243 17:13:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:00.243 17:13:09 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:00.243 17:13:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:00.243 17:13:09 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:00.243 17:13:09 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:00.243 17:13:09 -- 
nvmf/common.sh@105 -- # continue 2 00:09:00.243 17:13:09 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:00.243 17:13:09 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:00.243 17:13:09 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:00.243 17:13:09 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:00.243 17:13:09 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:00.243 17:13:09 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:00.243 17:13:09 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:00.243 17:13:09 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:00.243 17:13:09 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:00.243 17:13:09 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:00.243 17:13:09 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:00.243 17:13:09 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:00.243 17:13:09 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:09:00.243 192.168.100.9' 00:09:00.243 17:13:09 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:00.243 192.168.100.9' 00:09:00.243 17:13:09 -- nvmf/common.sh@446 -- # head -n 1 00:09:00.243 17:13:09 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:00.243 17:13:09 -- nvmf/common.sh@447 -- # tail -n +2 00:09:00.243 17:13:09 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:09:00.243 192.168.100.9' 00:09:00.243 17:13:09 -- nvmf/common.sh@447 -- # head -n 1 00:09:00.243 17:13:09 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:00.243 17:13:09 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:09:00.243 17:13:09 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:00.243 17:13:09 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:09:00.243 17:13:09 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:09:00.243 17:13:09 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:09:00.243 17:13:09 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:09:00.243 17:13:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:00.243 17:13:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:00.243 17:13:09 -- common/autotest_common.sh@10 -- # set +x 00:09:00.243 17:13:09 -- nvmf/common.sh@470 -- # nvmfpid=2977889 00:09:00.243 17:13:09 -- nvmf/common.sh@471 -- # waitforlisten 2977889 00:09:00.243 17:13:09 -- common/autotest_common.sh@817 -- # '[' -z 2977889 ']' 00:09:00.243 17:13:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.243 17:13:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:00.243 17:13:09 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:00.243 17:13:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.243 17:13:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:00.243 17:13:09 -- common/autotest_common.sh@10 -- # set +x 00:09:00.243 [2024-04-24 17:13:09.238168] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
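The first and second target IPs used for the rest of the hotplug-stress setup are simply the first two entries of RDMA_IP_LIST, as the head/tail pipeline traced above shows; in isolation the assignments reduce to:

    # Two addresses, one per mlx5 port, as gathered earlier in this run.
    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9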
00:09:00.243 [2024-04-24 17:13:09.238214] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.243 EAL: No free 2048 kB hugepages reported on node 1 00:09:00.243 [2024-04-24 17:13:09.293876] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:00.243 [2024-04-24 17:13:09.369961] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:00.243 [2024-04-24 17:13:09.369996] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:00.243 [2024-04-24 17:13:09.370003] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:00.243 [2024-04-24 17:13:09.370009] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:00.243 [2024-04-24 17:13:09.370014] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:00.243 [2024-04-24 17:13:09.370111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:00.243 [2024-04-24 17:13:09.370130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:00.243 [2024-04-24 17:13:09.370131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.810 17:13:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:00.810 17:13:10 -- common/autotest_common.sh@850 -- # return 0 00:09:00.810 17:13:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:00.810 17:13:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:00.810 17:13:10 -- common/autotest_common.sh@10 -- # set +x 00:09:01.068 17:13:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.068 17:13:10 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:09:01.068 17:13:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:01.068 [2024-04-24 17:13:10.250660] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1633680/0x1637b70) succeed. 00:09:01.068 [2024-04-24 17:13:10.260875] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1634bd0/0x1679200) succeed. 
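The rpc.py calls traced over the next several entries stand up the test subsystem. Collected in one place (arguments copied from the trace, with `rpc` standing in for the full /var/jenkins/.../spdk/scripts/rpc.py path), the bring-up is roughly:

rpc=./scripts/rpc.py   # shorthand for the workspace rpc.py used in the log

$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192   # RDMA transport, 8 KiB I/O unit
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
$rpc bdev_malloc_create 32 512 -b Malloc0          # 32 MB malloc bdev, 512-byte blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc bdev_null_create NULL1 1000 512               # null bdev that the stress loop will resize
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1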
00:09:01.326 17:13:10 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:01.326 17:13:10 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:01.585 [2024-04-24 17:13:10.702350] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:01.585 17:13:10 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:01.843 17:13:10 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:01.843 Malloc0 00:09:01.843 17:13:11 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:02.101 Delay0 00:09:02.101 17:13:11 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:02.360 17:13:11 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:02.360 NULL1 00:09:02.360 17:13:11 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:02.619 17:13:11 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:02.619 17:13:11 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=2977945 00:09:02.619 17:13:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2977945 00:09:02.619 17:13:11 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.619 EAL: No free 2048 kB hugepages reported on node 1 00:09:03.996 Read completed with error (sct=0, sc=11) 00:09:03.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.996 17:13:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:03.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.996 17:13:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:09:03.996 17:13:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:04.255 true 00:09:04.255 17:13:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2977945 00:09:04.255 17:13:13 -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:05.191 17:13:14 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:05.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:05.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:05.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:05.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:05.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:05.191 17:13:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:09:05.191 17:13:14 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:05.191 true 00:09:05.449 17:13:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2977945 00:09:05.449 17:13:14 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.016 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.275 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.275 17:13:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.275 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.275 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.275 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.275 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.275 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.275 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.275 17:13:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:09:06.275 17:13:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:06.533 true 00:09:06.533 17:13:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2977945 00:09:06.533 17:13:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.470 17:13:16 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:07.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.470 17:13:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:09:07.470 17:13:16 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:07.729 true 00:09:07.729 17:13:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2977945 00:09:07.729 17:13:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.666 17:13:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.666 17:13:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:09:08.666 17:13:17 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:08.925 true 00:09:08.925 17:13:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2977945 00:09:08.925 17:13:18 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:09.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.860 17:13:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:09.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.860 17:13:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:09:09.860 17:13:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:10.119 true 00:09:10.119 17:13:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2977945 00:09:10.119 17:13:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:11.055 17:13:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:11.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:11.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:11.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:11.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:11.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:11.055 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:09:11.055 17:13:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:09:11.055 17:13:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:11.314 true 00:09:11.314 17:13:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2977945 00:09:11.314 17:13:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:12.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:12.251 17:13:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:12.251 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:12.251 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:12.251 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:12.251 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:12.251 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:12.251 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:12.251 17:13:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:09:12.251 17:13:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:12.509 true 00:09:12.509 17:13:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2977945 00:09:12.509 17:13:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:13.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.446 17:13:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:13.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.446 17:13:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:09:13.446 17:13:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:13.704 true 00:09:13.704 17:13:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2977945 00:09:13.704 17:13:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:14.642 17:13:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:14.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:14.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:14.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
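The repeating blocks in this stretch of the log all come from the same stress loop. A condensed sketch follows; the spdk_nvme_perf command line is copied from the trace, while the loop structure is a simplification of what ns_hotplug_stress.sh does, not the verbatim script.

# Background reader: 512-byte random reads over RDMA against cnode1 for 30 s.
./build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!

rpc=./scripts/rpc.py
null_size=1000
# While perf is still alive, hot-remove namespace 1, re-attach Delay0, and
# grow NULL1 by one unit -- the "null_size=1001, 1002, ..." entries above.
while kill -0 "$PERF_PID" 2>/dev/null; do
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    $rpc bdev_null_resize NULL1 "$null_size"
done
wait "$PERF_PID"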
00:09:14.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:14.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:14.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:14.642 17:13:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:09:14.643 17:13:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:14.901 true 00:09:14.901 17:13:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2977945 00:09:14.901 17:13:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.838 17:13:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:15.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.838 17:13:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:09:15.838 17:13:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:16.097 true 00:09:16.097 17:13:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2977945 00:09:16.097 17:13:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:17.033 17:13:26 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:17.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:17.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:17.033 17:13:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:09:17.033 17:13:26 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:17.292 true 00:09:17.292 17:13:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2977945 00:09:17.292 17:13:26 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.229 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:18.229 17:13:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:18.229 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:18.229 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:18.229 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:18.229 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:18.229 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:18.229 17:13:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:09:18.229 17:13:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:18.488 true 00:09:18.488 17:13:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2977945 00:09:18.488 17:13:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:19.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:19.425 17:13:28 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:19.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:19.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:19.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:19.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:19.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:19.425 17:13:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:09:19.425 17:13:28 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:19.684 true 00:09:19.684 17:13:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2977945 00:09:19.684 17:13:28 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.620 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.620 17:13:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:20.620 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.620 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.620 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.620 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.620 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.620 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.620 17:13:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:09:20.620 17:13:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:20.620 true 00:09:20.879 17:13:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2977945 00:09:20.879 17:13:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.814 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:21.814 17:13:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:21.814 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:21.814 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:21.814 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:21.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:21.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:21.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:21.815 17:13:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:09:21.815 17:13:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:21.815 true 00:09:22.073 17:13:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2977945 00:09:22.073 17:13:31 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:23.007 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.007 17:13:31 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:23.007 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.007 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.007 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.007 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.007 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.007 17:13:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:09:23.007 17:13:32 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:23.007 true 00:09:23.266 17:13:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2977945 00:09:23.266 17:13:32 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:23.833 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:24.091 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:24.091 17:13:33 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:24.091 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:24.091 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:24.091 17:13:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:09:24.091 17:13:33 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:24.350 true 00:09:24.350 17:13:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2977945 00:09:24.350 17:13:33 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:25.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:25.286 17:13:34 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:25.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:25.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:25.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:25.286 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:09:25.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:25.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:25.286 17:13:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:09:25.286 17:13:34 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:25.545 true 00:09:25.545 17:13:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2977945 00:09:25.545 17:13:34 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:26.480 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.480 17:13:35 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:26.480 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.480 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.480 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.480 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.480 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.480 17:13:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:09:26.480 17:13:35 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:26.739 true 00:09:26.739 17:13:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2977945 00:09:26.739 17:13:35 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.676 17:13:36 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:27.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.676 17:13:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:09:27.676 17:13:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:27.934 true 00:09:27.934 17:13:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2977945 00:09:27.935 17:13:37 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:28.871 17:13:37 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:28.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:28.871 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:09:28.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:28.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:28.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:28.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:28.871 17:13:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:09:28.871 17:13:38 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:29.129 true 00:09:29.129 17:13:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2977945 00:09:29.129 17:13:38 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.063 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.063 17:13:39 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:30.063 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.063 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.063 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.063 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.063 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.063 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.063 17:13:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:09:30.063 17:13:39 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:30.321 true 00:09:30.321 17:13:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2977945 00:09:30.321 17:13:39 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:31.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.256 17:13:40 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:31.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.256 17:13:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:09:31.256 17:13:40 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:31.515 true 00:09:31.515 17:13:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2977945 00:09:31.515 17:13:40 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.452 17:13:41 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:32.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.452 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:09:32.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.452 17:13:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:09:32.452 17:13:41 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:32.711 true 00:09:32.711 17:13:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2977945 00:09:32.711 17:13:41 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.647 17:13:42 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.647 17:13:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:09:33.647 17:13:42 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:33.647 true 00:09:33.905 17:13:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2977945 00:09:33.905 17:13:42 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.905 17:13:43 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:34.164 17:13:43 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:09:34.164 17:13:43 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:34.164 true 00:09:34.164 17:13:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2977945 00:09:34.164 17:13:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.423 17:13:43 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:34.682 17:13:43 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:09:34.682 17:13:43 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:34.682 true 00:09:34.682 17:13:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2977945 00:09:34.682 17:13:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.969 17:13:44 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.279 17:13:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:09:35.279 17:13:44 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:35.279 true 00:09:35.279 17:13:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2977945 00:09:35.279 17:13:44 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.588 Initializing NVMe Controllers 00:09:35.588 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:09:35.588 
Controller IO queue size 128, less than required. 00:09:35.588 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:35.588 Controller IO queue size 128, less than required. 00:09:35.588 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:35.588 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:35.588 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:09:35.588 Initialization complete. Launching workers. 00:09:35.588 ======================================================== 00:09:35.588 Latency(us) 00:09:35.588 Device Information : IOPS MiB/s Average min max 00:09:35.588 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5412.83 2.64 21182.66 986.29 1138601.57 00:09:35.588 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 33873.70 16.54 3778.63 1394.61 294575.76 00:09:35.588 ======================================================== 00:09:35.588 Total : 39286.53 19.18 6176.53 986.29 1138601.57 00:09:35.588 00:09:35.588 17:13:44 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.588 17:13:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:09:35.588 17:13:44 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:09:35.871 true 00:09:35.871 17:13:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2977945 00:09:35.871 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (2977945) - No such process 00:09:35.871 17:13:44 -- target/ns_hotplug_stress.sh@44 -- # wait 2977945 00:09:35.871 17:13:44 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:09:35.871 17:13:44 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:09:35.871 17:13:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:35.871 17:13:44 -- nvmf/common.sh@117 -- # sync 00:09:35.871 17:13:44 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:35.871 17:13:44 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:35.871 17:13:44 -- nvmf/common.sh@120 -- # set +e 00:09:35.871 17:13:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:35.871 17:13:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:35.871 rmmod nvme_rdma 00:09:35.871 rmmod nvme_fabrics 00:09:35.871 17:13:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:35.871 17:13:44 -- nvmf/common.sh@124 -- # set -e 00:09:35.871 17:13:44 -- nvmf/common.sh@125 -- # return 0 00:09:35.871 17:13:44 -- nvmf/common.sh@478 -- # '[' -n 2977889 ']' 00:09:35.871 17:13:44 -- nvmf/common.sh@479 -- # killprocess 2977889 00:09:35.871 17:13:44 -- common/autotest_common.sh@936 -- # '[' -z 2977889 ']' 00:09:35.871 17:13:44 -- common/autotest_common.sh@940 -- # kill -0 2977889 00:09:35.871 17:13:44 -- common/autotest_common.sh@941 -- # uname 00:09:35.871 17:13:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:35.871 17:13:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2977889 00:09:35.871 17:13:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:35.871 17:13:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:35.871 17:13:45 -- common/autotest_common.sh@954 -- # echo 'killing process with 
pid 2977889' 00:09:35.871 killing process with pid 2977889 00:09:35.871 17:13:45 -- common/autotest_common.sh@955 -- # kill 2977889 00:09:35.871 17:13:45 -- common/autotest_common.sh@960 -- # wait 2977889 00:09:36.131 17:13:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:36.131 17:13:45 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:09:36.131 00:09:36.131 real 0m41.280s 00:09:36.131 user 2m35.823s 00:09:36.131 sys 0m6.607s 00:09:36.131 17:13:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:36.131 17:13:45 -- common/autotest_common.sh@10 -- # set +x 00:09:36.131 ************************************ 00:09:36.131 END TEST nvmf_ns_hotplug_stress 00:09:36.131 ************************************ 00:09:36.131 17:13:45 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:09:36.131 17:13:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:36.131 17:13:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:36.131 17:13:45 -- common/autotest_common.sh@10 -- # set +x 00:09:36.390 ************************************ 00:09:36.390 START TEST nvmf_connect_stress 00:09:36.390 ************************************ 00:09:36.390 17:13:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:09:36.390 * Looking for test storage... 00:09:36.390 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:36.390 17:13:45 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:36.390 17:13:45 -- nvmf/common.sh@7 -- # uname -s 00:09:36.390 17:13:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.390 17:13:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.390 17:13:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.390 17:13:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.390 17:13:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.390 17:13:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.390 17:13:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.390 17:13:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.390 17:13:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.390 17:13:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.390 17:13:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:09:36.390 17:13:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:09:36.390 17:13:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.390 17:13:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.390 17:13:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:36.390 17:13:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.390 17:13:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:36.390 17:13:45 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.390 17:13:45 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.390 17:13:45 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.390 17:13:45 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.390 17:13:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.390 17:13:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.390 17:13:45 -- paths/export.sh@5 -- # export PATH 00:09:36.390 17:13:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.390 17:13:45 -- nvmf/common.sh@47 -- # : 0 00:09:36.390 17:13:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:36.390 17:13:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:36.390 17:13:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.390 17:13:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.390 17:13:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.390 17:13:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:36.390 17:13:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:36.390 17:13:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:36.390 17:13:45 -- target/connect_stress.sh@12 -- # nvmftestinit 00:09:36.391 17:13:45 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:09:36.391 17:13:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.391 17:13:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:36.391 17:13:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:36.391 17:13:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:36.391 17:13:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.391 17:13:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:36.391 17:13:45 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.391 17:13:45 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:36.391 17:13:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:36.391 17:13:45 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:36.391 17:13:45 -- common/autotest_common.sh@10 -- # set +x 00:09:41.665 17:13:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:41.665 17:13:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:41.665 17:13:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:41.665 17:13:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:41.665 17:13:50 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:41.665 17:13:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:41.665 17:13:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:41.665 17:13:50 -- nvmf/common.sh@295 -- # net_devs=() 00:09:41.665 17:13:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:41.665 17:13:50 -- nvmf/common.sh@296 -- # e810=() 00:09:41.665 17:13:50 -- nvmf/common.sh@296 -- # local -ga e810 00:09:41.665 17:13:50 -- nvmf/common.sh@297 -- # x722=() 00:09:41.665 17:13:50 -- nvmf/common.sh@297 -- # local -ga x722 00:09:41.665 17:13:50 -- nvmf/common.sh@298 -- # mlx=() 00:09:41.665 17:13:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:41.665 17:13:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:41.665 17:13:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:41.665 17:13:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:41.665 17:13:50 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:41.665 17:13:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:41.665 17:13:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:41.665 17:13:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:41.665 17:13:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:41.665 17:13:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:41.665 17:13:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:41.665 17:13:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:41.665 17:13:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:41.665 17:13:50 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:41.665 17:13:50 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:41.665 17:13:50 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:41.665 17:13:50 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:41.665 17:13:50 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:41.665 17:13:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:41.665 17:13:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:41.665 17:13:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:09:41.665 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:09:41.665 17:13:50 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:41.665 17:13:50 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:41.665 17:13:50 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:41.665 17:13:50 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:41.665 17:13:50 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:41.665 17:13:50 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:41.665 17:13:50 -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:09:41.665 17:13:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:09:41.665 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:09:41.665 17:13:50 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:41.665 17:13:50 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:41.665 17:13:50 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:41.665 17:13:50 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:41.665 17:13:50 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:41.665 17:13:50 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:41.665 17:13:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:41.665 17:13:50 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:41.665 17:13:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:41.665 17:13:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.665 17:13:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:41.665 17:13:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.665 17:13:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:09:41.665 Found net devices under 0000:da:00.0: mlx_0_0 00:09:41.665 17:13:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.665 17:13:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:41.665 17:13:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.665 17:13:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:41.665 17:13:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.665 17:13:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:09:41.665 Found net devices under 0000:da:00.1: mlx_0_1 00:09:41.665 17:13:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.665 17:13:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:41.665 17:13:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:41.665 17:13:50 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:41.665 17:13:50 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:09:41.666 17:13:50 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:09:41.666 17:13:50 -- nvmf/common.sh@409 -- # rdma_device_init 00:09:41.666 17:13:50 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:09:41.666 17:13:50 -- nvmf/common.sh@58 -- # uname 00:09:41.666 17:13:50 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:41.666 17:13:50 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:41.666 17:13:50 -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:41.666 17:13:50 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:41.666 17:13:50 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:41.666 17:13:50 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:41.666 17:13:50 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:41.666 17:13:50 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:41.666 17:13:50 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:09:41.666 17:13:50 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:41.666 17:13:50 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:41.666 17:13:50 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:41.666 17:13:50 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:41.666 17:13:50 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:41.666 17:13:50 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:41.666 17:13:50 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 
00:09:41.666 17:13:50 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:41.666 17:13:50 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:41.666 17:13:50 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:41.666 17:13:50 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:41.666 17:13:50 -- nvmf/common.sh@105 -- # continue 2 00:09:41.666 17:13:50 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:41.666 17:13:50 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:41.666 17:13:50 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:41.666 17:13:50 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:41.666 17:13:50 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:41.666 17:13:50 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:41.666 17:13:50 -- nvmf/common.sh@105 -- # continue 2 00:09:41.666 17:13:50 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:41.666 17:13:50 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:41.666 17:13:50 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:41.666 17:13:50 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:41.666 17:13:50 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:41.666 17:13:50 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:41.666 17:13:50 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:41.666 17:13:50 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:41.666 17:13:50 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:41.666 430: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:41.666 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:09:41.666 altname enp218s0f0np0 00:09:41.666 altname ens818f0np0 00:09:41.666 inet 192.168.100.8/24 scope global mlx_0_0 00:09:41.666 valid_lft forever preferred_lft forever 00:09:41.666 17:13:50 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:41.666 17:13:50 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:41.666 17:13:50 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:41.666 17:13:50 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:41.666 17:13:50 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:41.666 17:13:50 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:41.666 17:13:50 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:41.666 17:13:50 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:41.666 17:13:50 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:41.666 431: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:41.666 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:09:41.666 altname enp218s0f1np1 00:09:41.666 altname ens818f1np1 00:09:41.666 inet 192.168.100.9/24 scope global mlx_0_1 00:09:41.666 valid_lft forever preferred_lft forever 00:09:41.666 17:13:50 -- nvmf/common.sh@411 -- # return 0 00:09:41.666 17:13:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:41.666 17:13:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:41.666 17:13:50 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:09:41.666 17:13:50 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:09:41.666 17:13:50 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:41.666 17:13:50 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:41.666 17:13:50 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:41.666 17:13:50 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:41.666 17:13:50 -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:41.666 17:13:50 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:41.666 17:13:50 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:41.666 17:13:50 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:41.666 17:13:50 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:41.666 17:13:50 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:41.666 17:13:50 -- nvmf/common.sh@105 -- # continue 2 00:09:41.666 17:13:50 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:41.666 17:13:50 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:41.666 17:13:50 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:41.666 17:13:50 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:41.666 17:13:50 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:41.666 17:13:50 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:41.666 17:13:50 -- nvmf/common.sh@105 -- # continue 2 00:09:41.666 17:13:50 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:41.666 17:13:50 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:41.666 17:13:50 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:41.666 17:13:50 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:41.666 17:13:50 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:41.666 17:13:50 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:41.666 17:13:50 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:41.666 17:13:50 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:41.666 17:13:50 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:41.666 17:13:50 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:41.666 17:13:50 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:41.666 17:13:50 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:41.666 17:13:50 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:09:41.666 192.168.100.9' 00:09:41.666 17:13:50 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:41.666 192.168.100.9' 00:09:41.666 17:13:50 -- nvmf/common.sh@446 -- # head -n 1 00:09:41.666 17:13:50 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:41.666 17:13:50 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:09:41.666 192.168.100.9' 00:09:41.666 17:13:50 -- nvmf/common.sh@447 -- # tail -n +2 00:09:41.666 17:13:50 -- nvmf/common.sh@447 -- # head -n 1 00:09:41.666 17:13:50 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:41.666 17:13:50 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:09:41.666 17:13:50 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:41.666 17:13:50 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:09:41.666 17:13:50 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:09:41.666 17:13:50 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:09:41.666 17:13:50 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:09:41.666 17:13:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:41.666 17:13:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:41.666 17:13:50 -- common/autotest_common.sh@10 -- # set +x 00:09:41.666 17:13:50 -- nvmf/common.sh@470 -- # nvmfpid=2980703 00:09:41.666 17:13:50 -- nvmf/common.sh@471 -- # waitforlisten 2980703 00:09:41.666 17:13:50 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:41.666 17:13:50 -- 
common/autotest_common.sh@817 -- # '[' -z 2980703 ']' 00:09:41.666 17:13:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.666 17:13:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:41.666 17:13:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.666 17:13:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:41.666 17:13:50 -- common/autotest_common.sh@10 -- # set +x 00:09:41.666 [2024-04-24 17:13:50.853126] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:09:41.666 [2024-04-24 17:13:50.853171] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.666 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.666 [2024-04-24 17:13:50.908931] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:41.925 [2024-04-24 17:13:50.981689] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:41.925 [2024-04-24 17:13:50.981734] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:41.925 [2024-04-24 17:13:50.981741] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:41.925 [2024-04-24 17:13:50.981747] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:41.925 [2024-04-24 17:13:50.981752] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:41.925 [2024-04-24 17:13:50.981869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:41.925 [2024-04-24 17:13:50.981957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:41.925 [2024-04-24 17:13:50.981958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.493 17:13:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:42.493 17:13:51 -- common/autotest_common.sh@850 -- # return 0 00:09:42.493 17:13:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:42.493 17:13:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:42.493 17:13:51 -- common/autotest_common.sh@10 -- # set +x 00:09:42.493 17:13:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:42.493 17:13:51 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:42.493 17:13:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:42.493 17:13:51 -- common/autotest_common.sh@10 -- # set +x 00:09:42.493 [2024-04-24 17:13:51.718874] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1098680/0x109cb70) succeed. 00:09:42.493 [2024-04-24 17:13:51.728800] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1099bd0/0x10de200) succeed. 
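Above, nvmfappstart launches build/bin/nvmf_tgt with core mask 0xE, waitforlisten polls /var/tmp/spdk.sock until the application answers, and the first rpc_cmd creates the RDMA transport (the two create_ib_device notices are the target opening mlx5_0/mlx5_1). A minimal sketch of the same bring-up, assuming rpc_cmd is the usual thin wrapper around scripts/rpc.py:

  # Start the NVMe-oF target on cores 1-3 (mask 0xE) and wait for its RPC socket.
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done

  # Create the RDMA transport with the buffer sizing used in this run.
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192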
00:09:42.753 17:13:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:42.753 17:13:51 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:42.753 17:13:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:42.753 17:13:51 -- common/autotest_common.sh@10 -- # set +x 00:09:42.753 17:13:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:42.753 17:13:51 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:42.753 17:13:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:42.753 17:13:51 -- common/autotest_common.sh@10 -- # set +x 00:09:42.753 [2024-04-24 17:13:51.837182] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:42.753 17:13:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:42.753 17:13:51 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:42.753 17:13:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:42.753 17:13:51 -- common/autotest_common.sh@10 -- # set +x 00:09:42.753 NULL1 00:09:42.753 17:13:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:42.753 17:13:51 -- target/connect_stress.sh@21 -- # PERF_PID=2980741 00:09:42.753 17:13:51 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:42.753 17:13:51 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:09:42.753 17:13:51 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:42.753 17:13:51 -- target/connect_stress.sh@27 -- # seq 1 20 00:09:42.753 17:13:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:42.753 17:13:51 -- target/connect_stress.sh@28 -- # cat 00:09:42.753 17:13:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:42.753 17:13:51 -- target/connect_stress.sh@28 -- # cat 00:09:42.753 17:13:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:42.753 17:13:51 -- target/connect_stress.sh@28 -- # cat 00:09:42.753 17:13:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:42.753 17:13:51 -- target/connect_stress.sh@28 -- # cat 00:09:42.753 17:13:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:42.753 17:13:51 -- target/connect_stress.sh@28 -- # cat 00:09:42.753 17:13:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:42.753 17:13:51 -- target/connect_stress.sh@28 -- # cat 00:09:42.753 17:13:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:42.753 17:13:51 -- target/connect_stress.sh@28 -- # cat 00:09:42.753 EAL: No free 2048 kB hugepages reported on node 1 00:09:42.753 17:13:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:42.753 17:13:51 -- target/connect_stress.sh@28 -- # cat 00:09:42.753 17:13:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:42.753 17:13:51 -- target/connect_stress.sh@28 -- # cat 00:09:42.753 17:13:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:42.753 17:13:51 -- target/connect_stress.sh@28 -- # cat 00:09:42.753 17:13:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:42.753 17:13:51 -- target/connect_stress.sh@28 -- # cat 
00:09:42.753 17:13:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:42.753 17:13:51 -- target/connect_stress.sh@28 -- # cat 00:09:42.753 17:13:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:42.753 17:13:51 -- target/connect_stress.sh@28 -- # cat 00:09:42.753 17:13:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:42.753 17:13:51 -- target/connect_stress.sh@28 -- # cat 00:09:42.753 17:13:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:42.753 17:13:51 -- target/connect_stress.sh@28 -- # cat 00:09:42.753 17:13:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:42.753 17:13:51 -- target/connect_stress.sh@28 -- # cat 00:09:42.753 17:13:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:42.753 17:13:51 -- target/connect_stress.sh@28 -- # cat 00:09:42.753 17:13:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:42.753 17:13:51 -- target/connect_stress.sh@28 -- # cat 00:09:42.753 17:13:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:42.753 17:13:51 -- target/connect_stress.sh@28 -- # cat 00:09:42.753 17:13:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:42.753 17:13:51 -- target/connect_stress.sh@28 -- # cat 00:09:42.753 17:13:51 -- target/connect_stress.sh@34 -- # kill -0 2980741 00:09:42.753 17:13:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:42.753 17:13:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:42.753 17:13:51 -- common/autotest_common.sh@10 -- # set +x 00:09:43.013 17:13:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:43.013 17:13:52 -- target/connect_stress.sh@34 -- # kill -0 2980741 00:09:43.013 17:13:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:43.013 17:13:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:43.013 17:13:52 -- common/autotest_common.sh@10 -- # set +x 00:09:43.581 17:13:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:43.581 17:13:52 -- target/connect_stress.sh@34 -- # kill -0 2980741 00:09:43.581 17:13:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:43.581 17:13:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:43.581 17:13:52 -- common/autotest_common.sh@10 -- # set +x 00:09:43.839 17:13:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:43.839 17:13:52 -- target/connect_stress.sh@34 -- # kill -0 2980741 00:09:43.839 17:13:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:43.839 17:13:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:43.839 17:13:52 -- common/autotest_common.sh@10 -- # set +x 00:09:44.097 17:13:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.097 17:13:53 -- target/connect_stress.sh@34 -- # kill -0 2980741 00:09:44.097 17:13:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:44.097 17:13:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.097 17:13:53 -- common/autotest_common.sh@10 -- # set +x 00:09:44.356 17:13:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.356 17:13:53 -- target/connect_stress.sh@34 -- # kill -0 2980741 00:09:44.356 17:13:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:44.356 17:13:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.356 17:13:53 -- common/autotest_common.sh@10 -- # set +x 00:09:44.924 17:13:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.924 17:13:53 -- target/connect_stress.sh@34 -- # kill -0 2980741 00:09:44.924 17:13:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:44.924 17:13:53 
-- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.924 17:13:53 -- common/autotest_common.sh@10 -- # set +x 00:09:45.183 17:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:45.183 17:13:54 -- target/connect_stress.sh@34 -- # kill -0 2980741 00:09:45.183 17:13:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:45.183 17:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:45.183 17:13:54 -- common/autotest_common.sh@10 -- # set +x 00:09:45.442 17:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:45.442 17:13:54 -- target/connect_stress.sh@34 -- # kill -0 2980741 00:09:45.442 17:13:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:45.442 17:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:45.442 17:13:54 -- common/autotest_common.sh@10 -- # set +x 00:09:45.701 17:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:45.701 17:13:54 -- target/connect_stress.sh@34 -- # kill -0 2980741 00:09:45.701 17:13:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:45.701 17:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:45.701 17:13:54 -- common/autotest_common.sh@10 -- # set +x 00:09:45.959 17:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:45.959 17:13:55 -- target/connect_stress.sh@34 -- # kill -0 2980741 00:09:45.959 17:13:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:45.959 17:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:45.959 17:13:55 -- common/autotest_common.sh@10 -- # set +x 00:09:46.526 17:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:46.526 17:13:55 -- target/connect_stress.sh@34 -- # kill -0 2980741 00:09:46.526 17:13:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:46.526 17:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:46.526 17:13:55 -- common/autotest_common.sh@10 -- # set +x 00:09:46.785 17:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:46.785 17:13:55 -- target/connect_stress.sh@34 -- # kill -0 2980741 00:09:46.785 17:13:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:46.785 17:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:46.785 17:13:55 -- common/autotest_common.sh@10 -- # set +x 00:09:47.044 17:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:47.044 17:13:56 -- target/connect_stress.sh@34 -- # kill -0 2980741 00:09:47.044 17:13:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:47.044 17:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:47.044 17:13:56 -- common/autotest_common.sh@10 -- # set +x 00:09:47.302 17:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:47.302 17:13:56 -- target/connect_stress.sh@34 -- # kill -0 2980741 00:09:47.302 17:13:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:47.302 17:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:47.302 17:13:56 -- common/autotest_common.sh@10 -- # set +x 00:09:47.560 17:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:47.560 17:13:56 -- target/connect_stress.sh@34 -- # kill -0 2980741 00:09:47.560 17:13:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:47.560 17:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:47.560 17:13:56 -- common/autotest_common.sh@10 -- # set +x 00:09:48.127 17:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.127 17:13:57 -- target/connect_stress.sh@34 -- # kill -0 2980741 00:09:48.127 17:13:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:48.127 17:13:57 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.127 17:13:57 -- common/autotest_common.sh@10 -- # set +x 00:09:48.385 17:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.385 17:13:57 -- target/connect_stress.sh@34 -- # kill -0 2980741 00:09:48.385 17:13:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:48.385 17:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.385 17:13:57 -- common/autotest_common.sh@10 -- # set +x 00:09:48.644 17:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.644 17:13:57 -- target/connect_stress.sh@34 -- # kill -0 2980741 00:09:48.644 17:13:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:48.644 17:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.644 17:13:57 -- common/autotest_common.sh@10 -- # set +x 00:09:48.903 17:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.903 17:13:58 -- target/connect_stress.sh@34 -- # kill -0 2980741 00:09:48.903 17:13:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:48.903 17:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.903 17:13:58 -- common/autotest_common.sh@10 -- # set +x 00:09:49.469 17:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:49.469 17:13:58 -- target/connect_stress.sh@34 -- # kill -0 2980741 00:09:49.469 17:13:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:49.469 17:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:49.469 17:13:58 -- common/autotest_common.sh@10 -- # set +x 00:09:49.728 17:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:49.728 17:13:58 -- target/connect_stress.sh@34 -- # kill -0 2980741 00:09:49.728 17:13:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:49.728 17:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:49.728 17:13:58 -- common/autotest_common.sh@10 -- # set +x 00:09:49.986 17:13:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:49.987 17:13:59 -- target/connect_stress.sh@34 -- # kill -0 2980741 00:09:49.987 17:13:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:49.987 17:13:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:49.987 17:13:59 -- common/autotest_common.sh@10 -- # set +x 00:09:50.245 17:13:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:50.245 17:13:59 -- target/connect_stress.sh@34 -- # kill -0 2980741 00:09:50.245 17:13:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:50.245 17:13:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:50.245 17:13:59 -- common/autotest_common.sh@10 -- # set +x 00:09:50.503 17:13:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:50.503 17:13:59 -- target/connect_stress.sh@34 -- # kill -0 2980741 00:09:50.503 17:13:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:50.503 17:13:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:50.504 17:13:59 -- common/autotest_common.sh@10 -- # set +x 00:09:51.071 17:14:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:51.071 17:14:00 -- target/connect_stress.sh@34 -- # kill -0 2980741 00:09:51.071 17:14:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:51.071 17:14:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:51.071 17:14:00 -- common/autotest_common.sh@10 -- # set +x 00:09:51.330 17:14:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:51.330 17:14:00 -- target/connect_stress.sh@34 -- # kill -0 2980741 00:09:51.330 17:14:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:51.330 17:14:00 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:09:51.330 17:14:00 -- common/autotest_common.sh@10 -- # set +x 00:09:51.589 17:14:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:51.589 17:14:00 -- target/connect_stress.sh@34 -- # kill -0 2980741 00:09:51.589 17:14:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:51.589 17:14:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:51.589 17:14:00 -- common/autotest_common.sh@10 -- # set +x 00:09:51.847 17:14:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:51.847 17:14:01 -- target/connect_stress.sh@34 -- # kill -0 2980741 00:09:51.847 17:14:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:51.847 17:14:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:51.847 17:14:01 -- common/autotest_common.sh@10 -- # set +x 00:09:52.414 17:14:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:52.414 17:14:01 -- target/connect_stress.sh@34 -- # kill -0 2980741 00:09:52.414 17:14:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:52.414 17:14:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:52.414 17:14:01 -- common/autotest_common.sh@10 -- # set +x 00:09:52.671 17:14:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:52.671 17:14:01 -- target/connect_stress.sh@34 -- # kill -0 2980741 00:09:52.671 17:14:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:52.671 17:14:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:52.671 17:14:01 -- common/autotest_common.sh@10 -- # set +x 00:09:52.930 17:14:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:52.930 17:14:02 -- target/connect_stress.sh@34 -- # kill -0 2980741 00:09:52.930 17:14:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:52.930 17:14:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:52.930 17:14:02 -- common/autotest_common.sh@10 -- # set +x 00:09:52.930 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:09:53.322 17:14:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:53.322 17:14:02 -- target/connect_stress.sh@34 -- # kill -0 2980741 00:09:53.322 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2980741) - No such process 00:09:53.322 17:14:02 -- target/connect_stress.sh@38 -- # wait 2980741 00:09:53.322 17:14:02 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:53.322 17:14:02 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:53.322 17:14:02 -- target/connect_stress.sh@43 -- # nvmftestfini 00:09:53.322 17:14:02 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:53.322 17:14:02 -- nvmf/common.sh@117 -- # sync 00:09:53.322 17:14:02 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:53.322 17:14:02 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:53.322 17:14:02 -- nvmf/common.sh@120 -- # set +e 00:09:53.322 17:14:02 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:53.322 17:14:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:53.322 rmmod nvme_rdma 00:09:53.322 rmmod nvme_fabrics 00:09:53.322 17:14:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:53.322 17:14:02 -- nvmf/common.sh@124 -- # set -e 00:09:53.322 17:14:02 -- nvmf/common.sh@125 -- # return 0 00:09:53.322 17:14:02 -- nvmf/common.sh@478 -- # '[' -n 2980703 ']' 00:09:53.322 17:14:02 -- nvmf/common.sh@479 -- # killprocess 2980703 00:09:53.322 17:14:02 -- common/autotest_common.sh@936 -- # '[' -z 2980703 ']' 
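The long run of kill -0 2980741 / rpc_cmd pairs above is the body of the stress test: while the connect_stress tool (PERF_PID) stays alive, the script keeps replaying the RPC batch it wrote to rpc.txt; once kill -0 reports "No such process" it waits for the tool, removes the scratch file, and nvmftestfini unloads the initiator modules and kills the target. Roughly, as a hedged sketch (the rpc_get_methods call stands in for whatever the rpc.txt batch actually contains):

  while kill -0 "$PERF_PID" 2>/dev/null; do
      ./scripts/rpc.py rpc_get_methods >/dev/null   # stand-in for the rpc.txt batch
  done
  wait "$PERF_PID" 2>/dev/null || true
  rm -f "$rpcs"                       # the rpc.txt scratch file
  sudo modprobe -v -r nvme-rdma       # nvmftestfini: unload initiator modules
  sudo modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                     # and stop nvmf_tgt (killprocess)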
00:09:53.322 17:14:02 -- common/autotest_common.sh@940 -- # kill -0 2980703 00:09:53.322 17:14:02 -- common/autotest_common.sh@941 -- # uname 00:09:53.322 17:14:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:53.322 17:14:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2980703 00:09:53.322 17:14:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:53.322 17:14:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:53.322 17:14:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2980703' 00:09:53.322 killing process with pid 2980703 00:09:53.322 17:14:02 -- common/autotest_common.sh@955 -- # kill 2980703 00:09:53.322 17:14:02 -- common/autotest_common.sh@960 -- # wait 2980703 00:09:53.579 17:14:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:53.579 17:14:02 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:09:53.579 00:09:53.579 real 0m17.277s 00:09:53.579 user 0m42.769s 00:09:53.579 sys 0m5.703s 00:09:53.579 17:14:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:53.579 17:14:02 -- common/autotest_common.sh@10 -- # set +x 00:09:53.579 ************************************ 00:09:53.579 END TEST nvmf_connect_stress 00:09:53.579 ************************************ 00:09:53.579 17:14:02 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:09:53.579 17:14:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:53.579 17:14:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:53.579 17:14:02 -- common/autotest_common.sh@10 -- # set +x 00:09:53.838 ************************************ 00:09:53.838 START TEST nvmf_fused_ordering 00:09:53.838 ************************************ 00:09:53.838 17:14:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:09:53.838 * Looking for test storage... 
00:09:53.838 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:53.838 17:14:02 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:53.838 17:14:02 -- nvmf/common.sh@7 -- # uname -s 00:09:53.838 17:14:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.838 17:14:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.838 17:14:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.838 17:14:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.838 17:14:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.838 17:14:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.838 17:14:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.838 17:14:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.838 17:14:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.838 17:14:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.838 17:14:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:09:53.838 17:14:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:09:53.838 17:14:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.838 17:14:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.838 17:14:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:53.838 17:14:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.838 17:14:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:53.838 17:14:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.838 17:14:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.838 17:14:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.838 17:14:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.838 17:14:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.838 17:14:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.838 17:14:02 -- paths/export.sh@5 -- # export PATH 00:09:53.838 17:14:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.838 17:14:02 -- nvmf/common.sh@47 -- # : 0 00:09:53.838 17:14:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:53.838 17:14:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:53.838 17:14:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.838 17:14:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.838 17:14:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.838 17:14:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:53.838 17:14:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:53.838 17:14:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:53.838 17:14:02 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:09:53.838 17:14:02 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:09:53.838 17:14:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.838 17:14:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:53.838 17:14:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:53.838 17:14:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:53.838 17:14:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.838 17:14:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:53.838 17:14:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.838 17:14:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:53.838 17:14:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:53.838 17:14:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:53.838 17:14:02 -- common/autotest_common.sh@10 -- # set +x 00:09:59.112 17:14:07 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:59.112 17:14:07 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:59.112 17:14:07 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:59.112 17:14:07 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:59.112 17:14:07 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:59.112 17:14:07 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:59.112 17:14:07 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:59.112 17:14:07 -- nvmf/common.sh@295 -- # net_devs=() 00:09:59.112 17:14:07 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:59.112 17:14:07 -- nvmf/common.sh@296 -- # e810=() 00:09:59.112 17:14:07 -- nvmf/common.sh@296 -- # local -ga e810 00:09:59.112 17:14:07 -- nvmf/common.sh@297 -- # x722=() 
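nvmf/common.sh, sourced again here for the fused_ordering test, also prepares the kernel-initiator side: it generates a host NQN with nvme gen-hostnqn, records it together with a host ID in NVME_HOST, and sets NVME_CONNECT='nvme connect -i 15' for RDMA. These particular tests drive the target with SPDK's own tools rather than nvme-cli, but as an illustration of how those variables would combine, a manual connect against the listener these tests create could look like:

  # Illustrative only; address, port and subsystem NQN are the ones in this log.
  nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"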
00:09:59.112 17:14:07 -- nvmf/common.sh@297 -- # local -ga x722 00:09:59.112 17:14:07 -- nvmf/common.sh@298 -- # mlx=() 00:09:59.112 17:14:07 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:59.112 17:14:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:59.112 17:14:07 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:59.112 17:14:07 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:59.112 17:14:07 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:59.112 17:14:07 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:59.112 17:14:07 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:59.112 17:14:07 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:59.112 17:14:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:59.112 17:14:07 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:59.112 17:14:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:59.112 17:14:07 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:59.112 17:14:07 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:59.112 17:14:07 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:59.112 17:14:07 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:59.112 17:14:07 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:59.112 17:14:07 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:59.112 17:14:07 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:59.112 17:14:07 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:59.112 17:14:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:59.112 17:14:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:09:59.112 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:09:59.112 17:14:07 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:59.112 17:14:07 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:59.112 17:14:07 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:59.112 17:14:07 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:59.112 17:14:07 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:59.112 17:14:07 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:59.112 17:14:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:59.112 17:14:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:09:59.112 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:09:59.112 17:14:07 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:59.112 17:14:07 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:59.112 17:14:07 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:59.112 17:14:07 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:59.112 17:14:07 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:59.112 17:14:07 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:59.112 17:14:07 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:59.112 17:14:07 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:59.112 17:14:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:59.112 17:14:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.112 17:14:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:59.112 17:14:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.112 17:14:07 -- nvmf/common.sh@389 -- # 
echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:09:59.112 Found net devices under 0000:da:00.0: mlx_0_0 00:09:59.112 17:14:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.112 17:14:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:59.112 17:14:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.112 17:14:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:59.112 17:14:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.112 17:14:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:09:59.112 Found net devices under 0000:da:00.1: mlx_0_1 00:09:59.112 17:14:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.112 17:14:07 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:59.112 17:14:07 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:59.112 17:14:07 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:59.112 17:14:07 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:09:59.112 17:14:07 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:09:59.112 17:14:07 -- nvmf/common.sh@409 -- # rdma_device_init 00:09:59.112 17:14:07 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:09:59.112 17:14:07 -- nvmf/common.sh@58 -- # uname 00:09:59.112 17:14:07 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:59.112 17:14:07 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:59.112 17:14:07 -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:59.112 17:14:07 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:59.112 17:14:07 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:59.112 17:14:07 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:59.112 17:14:07 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:59.112 17:14:07 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:59.112 17:14:07 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:09:59.112 17:14:07 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:59.112 17:14:07 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:59.112 17:14:07 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:59.112 17:14:07 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:59.112 17:14:07 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:59.112 17:14:07 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:59.112 17:14:07 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:59.112 17:14:07 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:59.112 17:14:07 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:59.112 17:14:07 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:59.112 17:14:07 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:59.112 17:14:07 -- nvmf/common.sh@105 -- # continue 2 00:09:59.112 17:14:07 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:59.112 17:14:07 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:59.112 17:14:07 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:59.112 17:14:07 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:59.112 17:14:07 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:59.112 17:14:07 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:59.112 17:14:07 -- nvmf/common.sh@105 -- # continue 2 00:09:59.112 17:14:07 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:59.112 17:14:07 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:59.112 17:14:07 -- nvmf/common.sh@112 -- # 
interface=mlx_0_0 00:09:59.112 17:14:07 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:59.112 17:14:07 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:59.112 17:14:07 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:59.112 17:14:07 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:59.112 17:14:07 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:59.112 17:14:07 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:59.112 430: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:59.112 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:09:59.112 altname enp218s0f0np0 00:09:59.112 altname ens818f0np0 00:09:59.112 inet 192.168.100.8/24 scope global mlx_0_0 00:09:59.112 valid_lft forever preferred_lft forever 00:09:59.112 17:14:07 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:59.112 17:14:07 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:59.112 17:14:07 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:59.112 17:14:07 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:59.112 17:14:07 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:59.112 17:14:07 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:59.112 17:14:08 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:59.112 17:14:08 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:59.112 17:14:08 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:59.112 431: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:59.112 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:09:59.112 altname enp218s0f1np1 00:09:59.112 altname ens818f1np1 00:09:59.112 inet 192.168.100.9/24 scope global mlx_0_1 00:09:59.112 valid_lft forever preferred_lft forever 00:09:59.112 17:14:08 -- nvmf/common.sh@411 -- # return 0 00:09:59.112 17:14:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:59.112 17:14:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:59.112 17:14:08 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:09:59.112 17:14:08 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:09:59.112 17:14:08 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:59.112 17:14:08 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:59.113 17:14:08 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:59.113 17:14:08 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:59.113 17:14:08 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:59.113 17:14:08 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:59.113 17:14:08 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:59.113 17:14:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:59.113 17:14:08 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:59.113 17:14:08 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:59.113 17:14:08 -- nvmf/common.sh@105 -- # continue 2 00:09:59.113 17:14:08 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:59.113 17:14:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:59.113 17:14:08 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:59.113 17:14:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:59.113 17:14:08 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:59.113 17:14:08 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:59.113 17:14:08 -- nvmf/common.sh@105 -- # continue 2 00:09:59.113 17:14:08 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:59.113 17:14:08 -- 
nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:59.113 17:14:08 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:59.113 17:14:08 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:59.113 17:14:08 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:59.113 17:14:08 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:59.113 17:14:08 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:59.113 17:14:08 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:59.113 17:14:08 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:59.113 17:14:08 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:59.113 17:14:08 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:59.113 17:14:08 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:59.113 17:14:08 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:09:59.113 192.168.100.9' 00:09:59.113 17:14:08 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:59.113 192.168.100.9' 00:09:59.113 17:14:08 -- nvmf/common.sh@446 -- # head -n 1 00:09:59.113 17:14:08 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:59.113 17:14:08 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:09:59.113 192.168.100.9' 00:09:59.113 17:14:08 -- nvmf/common.sh@447 -- # tail -n +2 00:09:59.113 17:14:08 -- nvmf/common.sh@447 -- # head -n 1 00:09:59.113 17:14:08 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:59.113 17:14:08 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:09:59.113 17:14:08 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:59.113 17:14:08 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:09:59.113 17:14:08 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:09:59.113 17:14:08 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:09:59.113 17:14:08 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:09:59.113 17:14:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:59.113 17:14:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:59.113 17:14:08 -- common/autotest_common.sh@10 -- # set +x 00:09:59.113 17:14:08 -- nvmf/common.sh@470 -- # nvmfpid=2983124 00:09:59.113 17:14:08 -- nvmf/common.sh@471 -- # waitforlisten 2983124 00:09:59.113 17:14:08 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:59.113 17:14:08 -- common/autotest_common.sh@817 -- # '[' -z 2983124 ']' 00:09:59.113 17:14:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.113 17:14:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:59.113 17:14:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.113 17:14:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:59.113 17:14:08 -- common/autotest_common.sh@10 -- # set +x 00:09:59.113 [2024-04-24 17:14:08.150369] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:09:59.113 [2024-04-24 17:14:08.150414] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.113 EAL: No free 2048 kB hugepages reported on node 1 00:09:59.113 [2024-04-24 17:14:08.205571] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.113 [2024-04-24 17:14:08.276011] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.113 [2024-04-24 17:14:08.276047] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:59.113 [2024-04-24 17:14:08.276054] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.113 [2024-04-24 17:14:08.276060] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.113 [2024-04-24 17:14:08.276064] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:59.113 [2024-04-24 17:14:08.276083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.048 17:14:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:00.048 17:14:08 -- common/autotest_common.sh@850 -- # return 0 00:10:00.048 17:14:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:00.048 17:14:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:00.048 17:14:08 -- common/autotest_common.sh@10 -- # set +x 00:10:00.048 17:14:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:00.048 17:14:08 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:00.048 17:14:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:00.049 17:14:08 -- common/autotest_common.sh@10 -- # set +x 00:10:00.049 [2024-04-24 17:14:08.994337] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9b80a0/0x9bc590) succeed. 00:10:00.049 [2024-04-24 17:14:09.003793] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x9b95a0/0x9fdc20) succeed. 
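With the transport in place (the two create_ib_device notices just above), the fused_ordering target is configured much like the connect_stress run was: create the subsystem, add an RDMA listener, back it with a null bdev and attach that as namespace 1, exactly as the entries that follow show. A minimal equivalent with scripts/rpc.py, under the same assumption that rpc_cmd forwards to it:

  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -m 10               # allow any host, serial number, max 10 namespaces
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t rdma -a 192.168.100.8 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512   # 1000 MB null bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1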
00:10:00.049 17:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:00.049 17:14:09 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:00.049 17:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:00.049 17:14:09 -- common/autotest_common.sh@10 -- # set +x 00:10:00.049 17:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:00.049 17:14:09 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:00.049 17:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:00.049 17:14:09 -- common/autotest_common.sh@10 -- # set +x 00:10:00.049 [2024-04-24 17:14:09.053504] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:00.049 17:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:00.049 17:14:09 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:00.049 17:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:00.049 17:14:09 -- common/autotest_common.sh@10 -- # set +x 00:10:00.049 NULL1 00:10:00.049 17:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:00.049 17:14:09 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:10:00.049 17:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:00.049 17:14:09 -- common/autotest_common.sh@10 -- # set +x 00:10:00.049 17:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:00.049 17:14:09 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:00.049 17:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:00.049 17:14:09 -- common/autotest_common.sh@10 -- # set +x 00:10:00.049 17:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:00.049 17:14:09 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:00.049 [2024-04-24 17:14:09.097293] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:10:00.049 [2024-04-24 17:14:09.097322] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2983155 ] 00:10:00.049 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.049 Attached to nqn.2016-06.io.spdk:cnode1 00:10:00.049 Namespace ID: 1 size: 1GB 00:10:00.049 fused_ordering(0) 00:10:00.049 fused_ordering(1) 00:10:00.049 fused_ordering(2) 00:10:00.049 fused_ordering(3) 00:10:00.049 fused_ordering(4) 00:10:00.049 fused_ordering(5) 00:10:00.049 fused_ordering(6) 00:10:00.049 fused_ordering(7) 00:10:00.049 fused_ordering(8) 00:10:00.049 fused_ordering(9) 00:10:00.049 fused_ordering(10) 00:10:00.049 fused_ordering(11) 00:10:00.049 fused_ordering(12) 00:10:00.049 fused_ordering(13) 00:10:00.049 fused_ordering(14) 00:10:00.049 fused_ordering(15) 00:10:00.049 fused_ordering(16) 00:10:00.049 fused_ordering(17) 00:10:00.049 fused_ordering(18) 00:10:00.049 fused_ordering(19) 00:10:00.049 fused_ordering(20) 00:10:00.049 fused_ordering(21) 00:10:00.049 fused_ordering(22) 00:10:00.049 fused_ordering(23) 00:10:00.049 fused_ordering(24) 00:10:00.049 fused_ordering(25) 00:10:00.049 fused_ordering(26) 00:10:00.049 fused_ordering(27) 00:10:00.049 fused_ordering(28) 00:10:00.049 fused_ordering(29) 00:10:00.049 fused_ordering(30) 00:10:00.049 fused_ordering(31) 00:10:00.049 fused_ordering(32) 00:10:00.049 fused_ordering(33) 00:10:00.049 fused_ordering(34) 00:10:00.049 fused_ordering(35) 00:10:00.049 fused_ordering(36) 00:10:00.049 fused_ordering(37) 00:10:00.049 fused_ordering(38) 00:10:00.049 fused_ordering(39) 00:10:00.049 fused_ordering(40) 00:10:00.049 fused_ordering(41) 00:10:00.049 fused_ordering(42) 00:10:00.049 fused_ordering(43) 00:10:00.049 fused_ordering(44) 00:10:00.049 fused_ordering(45) 00:10:00.049 fused_ordering(46) 00:10:00.049 fused_ordering(47) 00:10:00.049 fused_ordering(48) 00:10:00.049 fused_ordering(49) 00:10:00.049 fused_ordering(50) 00:10:00.049 fused_ordering(51) 00:10:00.049 fused_ordering(52) 00:10:00.049 fused_ordering(53) 00:10:00.049 fused_ordering(54) 00:10:00.049 fused_ordering(55) 00:10:00.049 fused_ordering(56) 00:10:00.049 fused_ordering(57) 00:10:00.049 fused_ordering(58) 00:10:00.049 fused_ordering(59) 00:10:00.049 fused_ordering(60) 00:10:00.049 fused_ordering(61) 00:10:00.049 fused_ordering(62) 00:10:00.049 fused_ordering(63) 00:10:00.049 fused_ordering(64) 00:10:00.049 fused_ordering(65) 00:10:00.049 fused_ordering(66) 00:10:00.049 fused_ordering(67) 00:10:00.049 fused_ordering(68) 00:10:00.049 fused_ordering(69) 00:10:00.049 fused_ordering(70) 00:10:00.049 fused_ordering(71) 00:10:00.049 fused_ordering(72) 00:10:00.049 fused_ordering(73) 00:10:00.049 fused_ordering(74) 00:10:00.049 fused_ordering(75) 00:10:00.049 fused_ordering(76) 00:10:00.049 fused_ordering(77) 00:10:00.049 fused_ordering(78) 00:10:00.049 fused_ordering(79) 00:10:00.049 fused_ordering(80) 00:10:00.049 fused_ordering(81) 00:10:00.049 fused_ordering(82) 00:10:00.049 fused_ordering(83) 00:10:00.049 fused_ordering(84) 00:10:00.049 fused_ordering(85) 00:10:00.049 fused_ordering(86) 00:10:00.049 fused_ordering(87) 00:10:00.049 fused_ordering(88) 00:10:00.049 fused_ordering(89) 00:10:00.049 fused_ordering(90) 00:10:00.049 fused_ordering(91) 00:10:00.049 fused_ordering(92) 00:10:00.049 fused_ordering(93) 00:10:00.049 fused_ordering(94) 00:10:00.049 fused_ordering(95) 00:10:00.049 fused_ordering(96) 00:10:00.049 
fused_ordering(97) 00:10:00.049 ... fused_ordering(956) 00:10:00.570 [identical per-command fused_ordering completion lines for indices 97 through 956 elided; the sequence runs uninterrupted between the entries shown above and below]
fused_ordering(957) 00:10:00.570 fused_ordering(958) 00:10:00.570 fused_ordering(959) 00:10:00.570 fused_ordering(960) 00:10:00.570 fused_ordering(961) 00:10:00.570 fused_ordering(962) 00:10:00.570 fused_ordering(963) 00:10:00.570 fused_ordering(964) 00:10:00.570 fused_ordering(965) 00:10:00.570 fused_ordering(966) 00:10:00.570 fused_ordering(967) 00:10:00.570 fused_ordering(968) 00:10:00.570 fused_ordering(969) 00:10:00.570 fused_ordering(970) 00:10:00.570 fused_ordering(971) 00:10:00.570 fused_ordering(972) 00:10:00.570 fused_ordering(973) 00:10:00.570 fused_ordering(974) 00:10:00.570 fused_ordering(975) 00:10:00.570 fused_ordering(976) 00:10:00.570 fused_ordering(977) 00:10:00.570 fused_ordering(978) 00:10:00.570 fused_ordering(979) 00:10:00.570 fused_ordering(980) 00:10:00.570 fused_ordering(981) 00:10:00.570 fused_ordering(982) 00:10:00.570 fused_ordering(983) 00:10:00.570 fused_ordering(984) 00:10:00.570 fused_ordering(985) 00:10:00.570 fused_ordering(986) 00:10:00.570 fused_ordering(987) 00:10:00.570 fused_ordering(988) 00:10:00.570 fused_ordering(989) 00:10:00.570 fused_ordering(990) 00:10:00.570 fused_ordering(991) 00:10:00.570 fused_ordering(992) 00:10:00.570 fused_ordering(993) 00:10:00.570 fused_ordering(994) 00:10:00.570 fused_ordering(995) 00:10:00.570 fused_ordering(996) 00:10:00.570 fused_ordering(997) 00:10:00.570 fused_ordering(998) 00:10:00.570 fused_ordering(999) 00:10:00.570 fused_ordering(1000) 00:10:00.570 fused_ordering(1001) 00:10:00.570 fused_ordering(1002) 00:10:00.570 fused_ordering(1003) 00:10:00.570 fused_ordering(1004) 00:10:00.570 fused_ordering(1005) 00:10:00.570 fused_ordering(1006) 00:10:00.570 fused_ordering(1007) 00:10:00.570 fused_ordering(1008) 00:10:00.570 fused_ordering(1009) 00:10:00.570 fused_ordering(1010) 00:10:00.570 fused_ordering(1011) 00:10:00.570 fused_ordering(1012) 00:10:00.570 fused_ordering(1013) 00:10:00.570 fused_ordering(1014) 00:10:00.570 fused_ordering(1015) 00:10:00.570 fused_ordering(1016) 00:10:00.570 fused_ordering(1017) 00:10:00.570 fused_ordering(1018) 00:10:00.570 fused_ordering(1019) 00:10:00.570 fused_ordering(1020) 00:10:00.570 fused_ordering(1021) 00:10:00.570 fused_ordering(1022) 00:10:00.570 fused_ordering(1023) 00:10:00.570 17:14:09 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:10:00.570 17:14:09 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:10:00.570 17:14:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:00.570 17:14:09 -- nvmf/common.sh@117 -- # sync 00:10:00.570 17:14:09 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:00.570 17:14:09 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:00.570 17:14:09 -- nvmf/common.sh@120 -- # set +e 00:10:00.570 17:14:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:00.570 17:14:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:00.570 rmmod nvme_rdma 00:10:00.570 rmmod nvme_fabrics 00:10:00.570 17:14:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:00.570 17:14:09 -- nvmf/common.sh@124 -- # set -e 00:10:00.570 17:14:09 -- nvmf/common.sh@125 -- # return 0 00:10:00.570 17:14:09 -- nvmf/common.sh@478 -- # '[' -n 2983124 ']' 00:10:00.570 17:14:09 -- nvmf/common.sh@479 -- # killprocess 2983124 00:10:00.570 17:14:09 -- common/autotest_common.sh@936 -- # '[' -z 2983124 ']' 00:10:00.570 17:14:09 -- common/autotest_common.sh@940 -- # kill -0 2983124 00:10:00.570 17:14:09 -- common/autotest_common.sh@941 -- # uname 00:10:00.570 17:14:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:00.570 17:14:09 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2983124 00:10:00.570 17:14:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:00.570 17:14:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:00.570 17:14:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2983124' 00:10:00.570 killing process with pid 2983124 00:10:00.570 17:14:09 -- common/autotest_common.sh@955 -- # kill 2983124 00:10:00.570 17:14:09 -- common/autotest_common.sh@960 -- # wait 2983124 00:10:00.827 17:14:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:00.827 17:14:10 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:10:00.827 00:10:00.827 real 0m7.184s 00:10:00.827 user 0m4.219s 00:10:00.827 sys 0m4.142s 00:10:00.827 17:14:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:00.827 17:14:10 -- common/autotest_common.sh@10 -- # set +x 00:10:00.827 ************************************ 00:10:00.827 END TEST nvmf_fused_ordering 00:10:00.827 ************************************ 00:10:01.085 17:14:10 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:10:01.085 17:14:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:01.085 17:14:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:01.085 17:14:10 -- common/autotest_common.sh@10 -- # set +x 00:10:01.085 ************************************ 00:10:01.085 START TEST nvmf_delete_subsystem 00:10:01.085 ************************************ 00:10:01.085 17:14:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:10:01.085 * Looking for test storage... 
00:10:01.085 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:01.085 17:14:10 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:01.085 17:14:10 -- nvmf/common.sh@7 -- # uname -s 00:10:01.085 17:14:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:01.085 17:14:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:01.085 17:14:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:01.085 17:14:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:01.085 17:14:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:01.085 17:14:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:01.085 17:14:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:01.085 17:14:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:01.085 17:14:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:01.085 17:14:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:01.085 17:14:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:10:01.085 17:14:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:10:01.085 17:14:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:01.085 17:14:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:01.085 17:14:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:01.085 17:14:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:01.085 17:14:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:01.085 17:14:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:01.085 17:14:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:01.085 17:14:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:01.085 17:14:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.085 17:14:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.085 17:14:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.085 17:14:10 -- paths/export.sh@5 -- # export PATH 00:10:01.085 17:14:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.085 17:14:10 -- nvmf/common.sh@47 -- # : 0 00:10:01.085 17:14:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:01.085 17:14:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:01.085 17:14:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:01.085 17:14:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:01.085 17:14:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:01.085 17:14:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:01.085 17:14:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:01.085 17:14:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:01.085 17:14:10 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:01.085 17:14:10 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:10:01.085 17:14:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:01.085 17:14:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:01.085 17:14:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:01.085 17:14:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:01.085 17:14:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.085 17:14:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:01.085 17:14:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.085 17:14:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:01.085 17:14:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:01.085 17:14:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:01.085 17:14:10 -- common/autotest_common.sh@10 -- # set +x 00:10:06.480 17:14:15 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:06.480 17:14:15 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:06.480 17:14:15 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:06.480 17:14:15 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:06.480 17:14:15 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:06.480 17:14:15 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:06.480 17:14:15 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:06.480 17:14:15 -- nvmf/common.sh@295 -- # net_devs=() 00:10:06.480 17:14:15 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:06.480 17:14:15 -- nvmf/common.sh@296 -- # e810=() 00:10:06.480 17:14:15 -- nvmf/common.sh@296 -- # local -ga e810 00:10:06.480 17:14:15 -- nvmf/common.sh@297 -- # 
x722=() 00:10:06.480 17:14:15 -- nvmf/common.sh@297 -- # local -ga x722 00:10:06.480 17:14:15 -- nvmf/common.sh@298 -- # mlx=() 00:10:06.480 17:14:15 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:06.480 17:14:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.480 17:14:15 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.481 17:14:15 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.481 17:14:15 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.481 17:14:15 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.481 17:14:15 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.481 17:14:15 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.481 17:14:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.481 17:14:15 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.481 17:14:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.481 17:14:15 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.481 17:14:15 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:06.481 17:14:15 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:06.481 17:14:15 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:06.481 17:14:15 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:06.481 17:14:15 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:06.481 17:14:15 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:06.481 17:14:15 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:06.481 17:14:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.481 17:14:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:10:06.481 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:10:06.481 17:14:15 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:06.481 17:14:15 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:06.481 17:14:15 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:06.481 17:14:15 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:06.481 17:14:15 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:06.481 17:14:15 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:06.481 17:14:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.481 17:14:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:10:06.481 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:10:06.481 17:14:15 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:06.481 17:14:15 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:06.481 17:14:15 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:06.481 17:14:15 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:06.481 17:14:15 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:06.481 17:14:15 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:06.481 17:14:15 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:06.481 17:14:15 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:06.481 17:14:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.481 17:14:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.481 17:14:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:06.481 17:14:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.481 17:14:15 -- 
nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:10:06.481 Found net devices under 0000:da:00.0: mlx_0_0 00:10:06.481 17:14:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.481 17:14:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.481 17:14:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.481 17:14:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:06.481 17:14:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.481 17:14:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:10:06.481 Found net devices under 0000:da:00.1: mlx_0_1 00:10:06.481 17:14:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.481 17:14:15 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:06.481 17:14:15 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:06.481 17:14:15 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:06.481 17:14:15 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:10:06.481 17:14:15 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:10:06.481 17:14:15 -- nvmf/common.sh@409 -- # rdma_device_init 00:10:06.481 17:14:15 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:10:06.481 17:14:15 -- nvmf/common.sh@58 -- # uname 00:10:06.481 17:14:15 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:06.481 17:14:15 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:06.481 17:14:15 -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:06.481 17:14:15 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:06.481 17:14:15 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:06.481 17:14:15 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:06.481 17:14:15 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:06.481 17:14:15 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:06.481 17:14:15 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:10:06.481 17:14:15 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:06.481 17:14:15 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:06.481 17:14:15 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:06.481 17:14:15 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:06.481 17:14:15 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:06.481 17:14:15 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:06.481 17:14:15 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:06.481 17:14:15 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:06.481 17:14:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:06.481 17:14:15 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:06.481 17:14:15 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:06.481 17:14:15 -- nvmf/common.sh@105 -- # continue 2 00:10:06.481 17:14:15 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:06.481 17:14:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:06.481 17:14:15 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:06.481 17:14:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:06.481 17:14:15 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:06.481 17:14:15 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:06.481 17:14:15 -- nvmf/common.sh@105 -- # continue 2 00:10:06.481 17:14:15 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:06.481 17:14:15 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:06.481 17:14:15 -- 
nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:06.481 17:14:15 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:06.481 17:14:15 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:06.481 17:14:15 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:06.481 17:14:15 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:06.481 17:14:15 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:06.481 17:14:15 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:06.481 430: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:06.481 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:10:06.481 altname enp218s0f0np0 00:10:06.481 altname ens818f0np0 00:10:06.481 inet 192.168.100.8/24 scope global mlx_0_0 00:10:06.481 valid_lft forever preferred_lft forever 00:10:06.481 17:14:15 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:06.481 17:14:15 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:06.481 17:14:15 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:06.481 17:14:15 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:06.481 17:14:15 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:06.481 17:14:15 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:06.481 17:14:15 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:06.481 17:14:15 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:06.481 17:14:15 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:06.481 431: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:06.481 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:10:06.481 altname enp218s0f1np1 00:10:06.481 altname ens818f1np1 00:10:06.481 inet 192.168.100.9/24 scope global mlx_0_1 00:10:06.481 valid_lft forever preferred_lft forever 00:10:06.481 17:14:15 -- nvmf/common.sh@411 -- # return 0 00:10:06.481 17:14:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:06.481 17:14:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:06.481 17:14:15 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:10:06.481 17:14:15 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:10:06.481 17:14:15 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:06.481 17:14:15 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:06.481 17:14:15 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:06.481 17:14:15 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:06.481 17:14:15 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:06.481 17:14:15 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:06.481 17:14:15 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:06.481 17:14:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:06.481 17:14:15 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:06.481 17:14:15 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:06.481 17:14:15 -- nvmf/common.sh@105 -- # continue 2 00:10:06.481 17:14:15 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:06.481 17:14:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:06.481 17:14:15 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:06.481 17:14:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:06.481 17:14:15 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:06.481 17:14:15 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:06.481 17:14:15 -- nvmf/common.sh@105 -- # continue 2 00:10:06.481 17:14:15 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 
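The trace above is nvmftestinit in nvmf/common.sh loading the RDMA kernel stack and giving each mlx5 port an address from the 192.168.100.0/24 test range (mlx_0_0 gets 192.168.100.8, mlx_0_1 gets 192.168.100.9; both links are still DOWN at this point). A rough manual equivalent, assuming the same interface names the log reports and ignoring the rxe/iWARP branches the script also handles:

  # load the IB/RDMA modules the same way load_ib_rdma_modules does
  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do sudo modprobe "$m"; done
  # assign the test addresses that allocate_nic_ips derives from NVMF_IP_PREFIX / NVMF_IP_LEAST_ADDR
  sudo ip addr add 192.168.100.8/24 dev mlx_0_0
  sudo ip addr add 192.168.100.9/24 dev mlx_0_1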
00:10:06.481 17:14:15 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:06.481 17:14:15 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:06.481 17:14:15 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:06.481 17:14:15 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:06.481 17:14:15 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:06.481 17:14:15 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:06.481 17:14:15 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:06.481 17:14:15 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:06.481 17:14:15 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:06.481 17:14:15 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:06.481 17:14:15 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:06.481 17:14:15 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:10:06.481 192.168.100.9' 00:10:06.481 17:14:15 -- nvmf/common.sh@446 -- # head -n 1 00:10:06.481 17:14:15 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:10:06.481 192.168.100.9' 00:10:06.481 17:14:15 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:06.481 17:14:15 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:10:06.481 192.168.100.9' 00:10:06.481 17:14:15 -- nvmf/common.sh@447 -- # tail -n +2 00:10:06.481 17:14:15 -- nvmf/common.sh@447 -- # head -n 1 00:10:06.482 17:14:15 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:06.482 17:14:15 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:10:06.482 17:14:15 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:06.482 17:14:15 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:10:06.482 17:14:15 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:10:06.482 17:14:15 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:10:06.482 17:14:15 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:06.482 17:14:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:06.482 17:14:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:06.482 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:10:06.482 17:14:15 -- nvmf/common.sh@470 -- # nvmfpid=2985387 00:10:06.482 17:14:15 -- nvmf/common.sh@471 -- # waitforlisten 2985387 00:10:06.482 17:14:15 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:06.482 17:14:15 -- common/autotest_common.sh@817 -- # '[' -z 2985387 ']' 00:10:06.482 17:14:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.482 17:14:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:06.482 17:14:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.482 17:14:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:06.482 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:10:06.482 [2024-04-24 17:14:15.261144] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:10:06.482 [2024-04-24 17:14:15.261185] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.482 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.482 [2024-04-24 17:14:15.315232] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:06.482 [2024-04-24 17:14:15.384909] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.482 [2024-04-24 17:14:15.384951] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:06.482 [2024-04-24 17:14:15.384957] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.482 [2024-04-24 17:14:15.384962] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:06.482 [2024-04-24 17:14:15.384967] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:06.482 [2024-04-24 17:14:15.385031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.482 [2024-04-24 17:14:15.385033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.049 17:14:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:07.050 17:14:16 -- common/autotest_common.sh@850 -- # return 0 00:10:07.050 17:14:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:07.050 17:14:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:07.050 17:14:16 -- common/autotest_common.sh@10 -- # set +x 00:10:07.050 17:14:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.050 17:14:16 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:07.050 17:14:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.050 17:14:16 -- common/autotest_common.sh@10 -- # set +x 00:10:07.050 [2024-04-24 17:14:16.108398] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18808b0/0x1884da0) succeed. 00:10:07.050 [2024-04-24 17:14:16.117264] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1881db0/0x18c6430) succeed. 
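By the end of this block the target side is up: nvmf_tgt (pid 2985387) is running on cores 0-1 and delete_subsystem.sh@15 has created the RDMA transport, which registers both mlx5 devices. A minimal sketch of the same bring-up done by hand, assuming the default rpc.py socket and paths relative to the SPDK repo root (the flags themselves are the ones visible in the trace):

  # start the target exactly as nvmfappstart does for this test
  sudo ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  # once /var/tmp/spdk.sock is listening, create the RDMA transport
  sudo ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192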
00:10:07.050 17:14:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.050 17:14:16 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:07.050 17:14:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.050 17:14:16 -- common/autotest_common.sh@10 -- # set +x 00:10:07.050 17:14:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.050 17:14:16 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:07.050 17:14:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.050 17:14:16 -- common/autotest_common.sh@10 -- # set +x 00:10:07.050 [2024-04-24 17:14:16.197225] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:07.050 17:14:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.050 17:14:16 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:07.050 17:14:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.050 17:14:16 -- common/autotest_common.sh@10 -- # set +x 00:10:07.050 NULL1 00:10:07.050 17:14:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.050 17:14:16 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:07.050 17:14:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.050 17:14:16 -- common/autotest_common.sh@10 -- # set +x 00:10:07.050 Delay0 00:10:07.050 17:14:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.050 17:14:16 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.050 17:14:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.050 17:14:16 -- common/autotest_common.sh@10 -- # set +x 00:10:07.050 17:14:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.050 17:14:16 -- target/delete_subsystem.sh@28 -- # perf_pid=2985420 00:10:07.050 17:14:16 -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:07.050 17:14:16 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:07.050 EAL: No free 2048 kB hugepages reported on node 1 00:10:07.050 [2024-04-24 17:14:16.294380] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
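This block is the whole fixture for the delete-subsystem test: subsystem cnode1 capped at 10 namespaces, an RDMA listener on 192.168.100.8:4420, a 1000 MB null bdev wrapped in a delay bdev, the delay bdev attached as a namespace, and spdk_nvme_perf (pid 2985420) started to keep I/O in flight while the subsystem is torn down. The same sequence as direct rpc.py calls, with only the script paths assumed; every argument is copied from the rpc_cmd lines above and the delete call that follows:

  sudo ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  sudo ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  sudo ./scripts/rpc.py bdev_null_create NULL1 1000 512
  sudo ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  sudo ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # drive I/O against the delayed namespace while the next step deletes the subsystem
  sudo ./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  sudo ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1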
00:10:09.583 17:14:18 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:09.583 17:14:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:09.583 17:14:18 -- common/autotest_common.sh@10 -- # set +x 00:10:10.151 NVMe io qpair process completion error 00:10:10.151 NVMe io qpair process completion error 00:10:10.151 NVMe io qpair process completion error 00:10:10.151 NVMe io qpair process completion error 00:10:10.151 NVMe io qpair process completion error 00:10:10.151 NVMe io qpair process completion error 00:10:10.151 NVMe io qpair process completion error 00:10:10.151 17:14:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:10.151 17:14:19 -- target/delete_subsystem.sh@34 -- # delay=0 00:10:10.151 17:14:19 -- target/delete_subsystem.sh@35 -- # kill -0 2985420 00:10:10.151 17:14:19 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:10.718 17:14:19 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:10.718 17:14:19 -- target/delete_subsystem.sh@35 -- # kill -0 2985420 00:10:10.718 17:14:19 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:11.286 Write completed with error (sct=0, sc=8) 00:10:11.286 starting I/O failed: -6 00:10:11.286 Read completed with error (sct=0, sc=8) 00:10:11.286 starting I/O failed: -6 00:10:11.286 Write completed with error (sct=0, sc=8) 00:10:11.286 starting I/O failed: -6 00:10:11.286 Write completed with error (sct=0, sc=8) 00:10:11.286 starting I/O failed: -6 00:10:11.286 Write completed with error (sct=0, sc=8) 00:10:11.286 starting I/O failed: -6 00:10:11.286 Read completed with error (sct=0, sc=8) 00:10:11.286 starting I/O failed: -6 00:10:11.286 Read completed with error (sct=0, sc=8) 00:10:11.286 starting I/O failed: -6 00:10:11.286 Read completed with error (sct=0, sc=8) 00:10:11.286 starting I/O failed: -6 00:10:11.286 Read completed with error (sct=0, sc=8) 00:10:11.286 starting I/O failed: -6 00:10:11.286 Read completed with error (sct=0, sc=8) 00:10:11.286 starting I/O failed: -6 00:10:11.286 Write completed with error (sct=0, sc=8) 00:10:11.286 starting I/O failed: -6 00:10:11.286 Read completed with error (sct=0, sc=8) 00:10:11.286 starting I/O failed: -6 00:10:11.286 Write completed with error (sct=0, sc=8) 00:10:11.286 starting I/O failed: -6 00:10:11.286 Read completed with error (sct=0, sc=8) 00:10:11.286 starting I/O failed: -6 00:10:11.286 Read completed with error (sct=0, sc=8) 00:10:11.286 starting I/O failed: -6 00:10:11.286 Write completed with error (sct=0, sc=8) 00:10:11.286 starting I/O failed: -6 00:10:11.286 Read completed with error (sct=0, sc=8) 00:10:11.286 starting I/O failed: -6 00:10:11.286 Read completed with error (sct=0, sc=8) 00:10:11.286 starting I/O failed: -6 00:10:11.286 Read completed with error (sct=0, sc=8) 00:10:11.286 starting I/O failed: -6 00:10:11.286 Read completed with error (sct=0, sc=8) 00:10:11.286 starting I/O failed: -6 00:10:11.286 Read completed with error (sct=0, sc=8) 00:10:11.286 starting I/O failed: -6 00:10:11.286 Read completed with error (sct=0, sc=8) 00:10:11.286 starting I/O failed: -6 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 
starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Write completed with 
error (sct=0, sc=8) 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 
00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 starting I/O failed: -6 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 Write completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.287 Read completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Read completed 
with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 
00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Write completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 Read completed with error (sct=0, sc=8) 00:10:11.288 17:14:20 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:11.288 17:14:20 -- target/delete_subsystem.sh@35 -- # kill -0 2985420 00:10:11.288 17:14:20 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:11.288 [2024-04-24 17:14:20.393405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:10:11.288 [2024-04-24 17:14:20.393447] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:10:11.288 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:10:11.288 Initializing NVMe Controllers 00:10:11.288 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:10:11.288 Controller IO queue size 128, less than required. 00:10:11.288 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:11.288 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:11.288 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:11.288 Initialization complete. Launching workers. 
00:10:11.288 ======================================================== 00:10:11.288 Latency(us) 00:10:11.288 Device Information : IOPS MiB/s Average min max 00:10:11.288 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.46 0.04 1594012.31 1000212.11 2977245.37 00:10:11.288 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.46 0.04 1595595.23 1000185.41 2978490.52 00:10:11.288 ======================================================== 00:10:11.288 Total : 160.92 0.08 1594803.77 1000185.41 2978490.52 00:10:11.288 00:10:11.855 17:14:20 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:11.855 17:14:20 -- target/delete_subsystem.sh@35 -- # kill -0 2985420 00:10:11.855 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2985420) - No such process 00:10:11.855 17:14:20 -- target/delete_subsystem.sh@45 -- # NOT wait 2985420 00:10:11.855 17:14:20 -- common/autotest_common.sh@638 -- # local es=0 00:10:11.855 17:14:20 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 2985420 00:10:11.855 17:14:20 -- common/autotest_common.sh@626 -- # local arg=wait 00:10:11.855 17:14:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:11.855 17:14:20 -- common/autotest_common.sh@630 -- # type -t wait 00:10:11.855 17:14:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:11.855 17:14:20 -- common/autotest_common.sh@641 -- # wait 2985420 00:10:11.855 17:14:20 -- common/autotest_common.sh@641 -- # es=1 00:10:11.855 17:14:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:11.855 17:14:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:11.855 17:14:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:11.855 17:14:20 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:11.855 17:14:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:11.855 17:14:20 -- common/autotest_common.sh@10 -- # set +x 00:10:11.855 17:14:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:11.855 17:14:20 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:11.855 17:14:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:11.855 17:14:20 -- common/autotest_common.sh@10 -- # set +x 00:10:11.855 [2024-04-24 17:14:20.910173] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:11.855 17:14:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:11.855 17:14:20 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.855 17:14:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:11.855 17:14:20 -- common/autotest_common.sh@10 -- # set +x 00:10:11.855 17:14:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:11.855 17:14:20 -- target/delete_subsystem.sh@54 -- # perf_pid=2985486 00:10:11.855 17:14:20 -- target/delete_subsystem.sh@56 -- # delay=0 00:10:11.855 17:14:20 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:11.855 17:14:20 -- target/delete_subsystem.sh@57 -- # kill -0 2985486 00:10:11.855 17:14:20 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:11.855 
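With requests parked in the delay bdev, nvmf_delete_subsystem forces the initiator's qpairs into CQ transport errors and spdk_nvme_perf exits on its own; the script only has to poll for that exit with kill -0 until the PID disappears, which is the "No such process" message above. A rough equivalent of that wait loop (the iteration bound is 30 in the first run and 20 in the second; the exact loop body in delete_subsystem.sh is not shown in this log), assuming perf_pid holds the perf process ID:

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        if (( delay++ > 30 )); then
            echo "spdk_nvme_perf did not exit after subsystem deletion" >&2
            break
        fi
        sleep 0.5
    done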
EAL: No free 2048 kB hugepages reported on node 1 00:10:11.855 [2024-04-24 17:14:20.989900] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:10:12.421 17:14:21 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:12.421 17:14:21 -- target/delete_subsystem.sh@57 -- # kill -0 2985486 00:10:12.421 17:14:21 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:12.988 17:14:21 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:12.988 17:14:21 -- target/delete_subsystem.sh@57 -- # kill -0 2985486 00:10:12.988 17:14:21 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:13.246 17:14:22 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:13.246 17:14:22 -- target/delete_subsystem.sh@57 -- # kill -0 2985486 00:10:13.246 17:14:22 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:13.813 17:14:22 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:13.813 17:14:22 -- target/delete_subsystem.sh@57 -- # kill -0 2985486 00:10:13.813 17:14:22 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:14.380 17:14:23 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:14.380 17:14:23 -- target/delete_subsystem.sh@57 -- # kill -0 2985486 00:10:14.380 17:14:23 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:14.947 17:14:23 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:14.947 17:14:23 -- target/delete_subsystem.sh@57 -- # kill -0 2985486 00:10:14.947 17:14:23 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:15.514 17:14:24 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:15.514 17:14:24 -- target/delete_subsystem.sh@57 -- # kill -0 2985486 00:10:15.514 17:14:24 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:15.772 17:14:24 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:15.772 17:14:24 -- target/delete_subsystem.sh@57 -- # kill -0 2985486 00:10:15.772 17:14:24 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:16.340 17:14:25 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:16.340 17:14:25 -- target/delete_subsystem.sh@57 -- # kill -0 2985486 00:10:16.340 17:14:25 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:16.907 17:14:25 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:16.907 17:14:25 -- target/delete_subsystem.sh@57 -- # kill -0 2985486 00:10:16.907 17:14:25 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:17.474 17:14:26 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:17.474 17:14:26 -- target/delete_subsystem.sh@57 -- # kill -0 2985486 00:10:17.474 17:14:26 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:17.732 17:14:26 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:17.732 17:14:26 -- target/delete_subsystem.sh@57 -- # kill -0 2985486 00:10:17.732 17:14:26 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:18.299 17:14:27 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:18.299 17:14:27 -- target/delete_subsystem.sh@57 -- # kill -0 2985486 00:10:18.299 17:14:27 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:18.866 17:14:27 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:18.866 17:14:27 -- target/delete_subsystem.sh@57 -- # kill -0 2985486 00:10:18.866 17:14:27 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:19.125 Initializing NVMe 
Controllers 00:10:19.125 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:10:19.125 Controller IO queue size 128, less than required. 00:10:19.125 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:19.125 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:19.125 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:19.125 Initialization complete. Launching workers. 00:10:19.125 ======================================================== 00:10:19.125 Latency(us) 00:10:19.125 Device Information : IOPS MiB/s Average min max 00:10:19.125 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001622.86 1000054.82 1004140.61 00:10:19.125 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002698.82 1000212.41 1006543.43 00:10:19.125 ======================================================== 00:10:19.125 Total : 256.00 0.12 1002160.84 1000054.82 1006543.43 00:10:19.125 00:10:19.384 17:14:28 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:19.384 17:14:28 -- target/delete_subsystem.sh@57 -- # kill -0 2985486 00:10:19.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2985486) - No such process 00:10:19.384 17:14:28 -- target/delete_subsystem.sh@67 -- # wait 2985486 00:10:19.384 17:14:28 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:19.384 17:14:28 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:19.384 17:14:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:19.384 17:14:28 -- nvmf/common.sh@117 -- # sync 00:10:19.384 17:14:28 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:19.384 17:14:28 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:19.384 17:14:28 -- nvmf/common.sh@120 -- # set +e 00:10:19.384 17:14:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:19.384 17:14:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:19.384 rmmod nvme_rdma 00:10:19.384 rmmod nvme_fabrics 00:10:19.384 17:14:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:19.384 17:14:28 -- nvmf/common.sh@124 -- # set -e 00:10:19.384 17:14:28 -- nvmf/common.sh@125 -- # return 0 00:10:19.384 17:14:28 -- nvmf/common.sh@478 -- # '[' -n 2985387 ']' 00:10:19.384 17:14:28 -- nvmf/common.sh@479 -- # killprocess 2985387 00:10:19.384 17:14:28 -- common/autotest_common.sh@936 -- # '[' -z 2985387 ']' 00:10:19.384 17:14:28 -- common/autotest_common.sh@940 -- # kill -0 2985387 00:10:19.384 17:14:28 -- common/autotest_common.sh@941 -- # uname 00:10:19.384 17:14:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:19.384 17:14:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2985387 00:10:19.384 17:14:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:19.384 17:14:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:19.384 17:14:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2985387' 00:10:19.384 killing process with pid 2985387 00:10:19.384 17:14:28 -- common/autotest_common.sh@955 -- # kill 2985387 00:10:19.384 17:14:28 -- common/autotest_common.sh@960 -- # wait 2985387 00:10:19.643 17:14:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:19.643 17:14:28 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:10:19.643 00:10:19.643 real 0m18.635s 
00:10:19.643 user 0m49.586s 00:10:19.643 sys 0m4.763s 00:10:19.643 17:14:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:19.643 17:14:28 -- common/autotest_common.sh@10 -- # set +x 00:10:19.643 ************************************ 00:10:19.643 END TEST nvmf_delete_subsystem 00:10:19.643 ************************************ 00:10:19.643 17:14:28 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:10:19.643 17:14:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:19.643 17:14:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:19.643 17:14:28 -- common/autotest_common.sh@10 -- # set +x 00:10:19.903 ************************************ 00:10:19.903 START TEST nvmf_ns_masking 00:10:19.903 ************************************ 00:10:19.903 17:14:28 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:10:19.903 * Looking for test storage... 00:10:19.903 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:19.903 17:14:29 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:19.903 17:14:29 -- nvmf/common.sh@7 -- # uname -s 00:10:19.903 17:14:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.903 17:14:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.903 17:14:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.903 17:14:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.903 17:14:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.903 17:14:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.903 17:14:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.903 17:14:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.903 17:14:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.903 17:14:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.903 17:14:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:10:19.903 17:14:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:10:19.903 17:14:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.903 17:14:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.903 17:14:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:19.903 17:14:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.903 17:14:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:19.903 17:14:29 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.903 17:14:29 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.903 17:14:29 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.903 17:14:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.903 17:14:29 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.903 17:14:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.903 17:14:29 -- paths/export.sh@5 -- # export PATH 00:10:19.903 17:14:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.903 17:14:29 -- nvmf/common.sh@47 -- # : 0 00:10:19.903 17:14:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:19.903 17:14:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:19.903 17:14:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.903 17:14:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.903 17:14:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.903 17:14:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:19.903 17:14:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:19.903 17:14:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:19.903 17:14:29 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:19.903 17:14:29 -- target/ns_masking.sh@11 -- # loops=5 00:10:19.903 17:14:29 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:10:19.903 17:14:29 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:10:19.903 17:14:29 -- target/ns_masking.sh@15 -- # uuidgen 00:10:19.903 17:14:29 -- target/ns_masking.sh@15 -- # HOSTID=3b6198cb-517b-4f47-8b14-d07d721f2eb3 00:10:19.903 17:14:29 -- target/ns_masking.sh@44 -- # nvmftestinit 00:10:19.903 17:14:29 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:10:19.903 17:14:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:19.903 17:14:29 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:19.903 17:14:29 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:19.903 17:14:29 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:19.903 17:14:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.903 17:14:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:19.903 17:14:29 -- common/autotest_common.sh@22 -- 
# _remove_spdk_ns 00:10:19.903 17:14:29 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:19.903 17:14:29 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:19.903 17:14:29 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:19.903 17:14:29 -- common/autotest_common.sh@10 -- # set +x 00:10:25.174 17:14:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:25.174 17:14:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:25.174 17:14:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:25.174 17:14:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:25.174 17:14:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:25.174 17:14:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:25.174 17:14:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:25.174 17:14:33 -- nvmf/common.sh@295 -- # net_devs=() 00:10:25.174 17:14:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:25.174 17:14:33 -- nvmf/common.sh@296 -- # e810=() 00:10:25.174 17:14:33 -- nvmf/common.sh@296 -- # local -ga e810 00:10:25.174 17:14:33 -- nvmf/common.sh@297 -- # x722=() 00:10:25.174 17:14:33 -- nvmf/common.sh@297 -- # local -ga x722 00:10:25.174 17:14:33 -- nvmf/common.sh@298 -- # mlx=() 00:10:25.174 17:14:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:25.174 17:14:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:25.174 17:14:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:25.174 17:14:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:25.174 17:14:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:25.174 17:14:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:25.174 17:14:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:25.174 17:14:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:25.174 17:14:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:25.174 17:14:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:25.174 17:14:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:25.174 17:14:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:25.174 17:14:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:25.174 17:14:33 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:25.174 17:14:33 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:25.174 17:14:33 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:25.174 17:14:33 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:25.174 17:14:33 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:25.174 17:14:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:25.174 17:14:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:25.174 17:14:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:10:25.174 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:10:25.174 17:14:33 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:25.174 17:14:33 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:25.174 17:14:33 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:25.174 17:14:33 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:25.174 17:14:33 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:25.174 17:14:33 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:25.174 17:14:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:25.174 
17:14:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:10:25.174 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:10:25.174 17:14:33 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:25.174 17:14:33 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:25.174 17:14:33 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:25.174 17:14:33 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:25.174 17:14:33 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:25.174 17:14:33 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:25.174 17:14:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:25.174 17:14:33 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:25.174 17:14:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:25.174 17:14:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:25.174 17:14:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:25.174 17:14:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:25.174 17:14:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:10:25.174 Found net devices under 0000:da:00.0: mlx_0_0 00:10:25.174 17:14:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:25.174 17:14:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:25.174 17:14:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:25.174 17:14:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:25.174 17:14:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:25.174 17:14:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:10:25.174 Found net devices under 0000:da:00.1: mlx_0_1 00:10:25.174 17:14:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:25.174 17:14:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:25.175 17:14:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:25.175 17:14:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:25.175 17:14:33 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:10:25.175 17:14:33 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:10:25.175 17:14:33 -- nvmf/common.sh@409 -- # rdma_device_init 00:10:25.175 17:14:33 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:10:25.175 17:14:33 -- nvmf/common.sh@58 -- # uname 00:10:25.175 17:14:33 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:25.175 17:14:33 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:25.175 17:14:33 -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:25.175 17:14:33 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:25.175 17:14:33 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:25.175 17:14:33 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:25.175 17:14:33 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:25.175 17:14:33 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:25.175 17:14:33 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:10:25.175 17:14:33 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:25.175 17:14:33 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:25.175 17:14:33 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:25.175 17:14:33 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:25.175 17:14:33 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:25.175 17:14:33 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:25.175 17:14:33 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:25.175 17:14:33 -- 
nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:25.175 17:14:33 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:25.175 17:14:33 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:25.175 17:14:33 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:25.175 17:14:33 -- nvmf/common.sh@105 -- # continue 2 00:10:25.175 17:14:33 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:25.175 17:14:33 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:25.175 17:14:33 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:25.175 17:14:33 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:25.175 17:14:33 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:25.175 17:14:33 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:25.175 17:14:33 -- nvmf/common.sh@105 -- # continue 2 00:10:25.175 17:14:33 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:25.175 17:14:33 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:25.175 17:14:33 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:25.175 17:14:33 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:25.175 17:14:33 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:25.175 17:14:33 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:25.175 17:14:33 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:25.175 17:14:33 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:25.175 17:14:33 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:25.175 430: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:25.175 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:10:25.175 altname enp218s0f0np0 00:10:25.175 altname ens818f0np0 00:10:25.175 inet 192.168.100.8/24 scope global mlx_0_0 00:10:25.175 valid_lft forever preferred_lft forever 00:10:25.175 17:14:33 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:25.175 17:14:33 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:25.175 17:14:33 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:25.175 17:14:33 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:25.175 17:14:33 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:25.175 17:14:33 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:25.175 17:14:33 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:25.175 17:14:33 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:25.175 17:14:33 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:25.175 431: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:25.175 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:10:25.175 altname enp218s0f1np1 00:10:25.175 altname ens818f1np1 00:10:25.175 inet 192.168.100.9/24 scope global mlx_0_1 00:10:25.175 valid_lft forever preferred_lft forever 00:10:25.175 17:14:33 -- nvmf/common.sh@411 -- # return 0 00:10:25.175 17:14:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:25.175 17:14:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:25.175 17:14:33 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:10:25.175 17:14:33 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:10:25.175 17:14:33 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:25.175 17:14:33 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:25.175 17:14:33 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:25.175 17:14:33 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:25.175 17:14:33 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh 
rxe-net 00:10:25.175 17:14:33 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:25.175 17:14:33 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:25.175 17:14:33 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:25.175 17:14:33 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:25.175 17:14:33 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:25.175 17:14:33 -- nvmf/common.sh@105 -- # continue 2 00:10:25.175 17:14:33 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:25.175 17:14:33 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:25.175 17:14:33 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:25.175 17:14:33 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:25.175 17:14:33 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:25.175 17:14:33 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:25.175 17:14:33 -- nvmf/common.sh@105 -- # continue 2 00:10:25.175 17:14:33 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:25.175 17:14:33 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:25.175 17:14:33 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:25.175 17:14:33 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:25.175 17:14:33 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:25.175 17:14:33 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:25.175 17:14:33 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:25.175 17:14:33 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:25.175 17:14:33 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:25.175 17:14:33 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:25.175 17:14:33 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:25.175 17:14:33 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:25.175 17:14:33 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:10:25.175 192.168.100.9' 00:10:25.175 17:14:33 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:10:25.175 192.168.100.9' 00:10:25.175 17:14:33 -- nvmf/common.sh@446 -- # head -n 1 00:10:25.175 17:14:33 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:25.175 17:14:33 -- nvmf/common.sh@447 -- # tail -n +2 00:10:25.175 17:14:33 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:10:25.175 192.168.100.9' 00:10:25.175 17:14:33 -- nvmf/common.sh@447 -- # head -n 1 00:10:25.175 17:14:33 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:25.175 17:14:33 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:10:25.175 17:14:33 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:25.175 17:14:33 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:10:25.175 17:14:33 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:10:25.175 17:14:33 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:10:25.175 17:14:33 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:10:25.175 17:14:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:25.175 17:14:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:25.175 17:14:33 -- common/autotest_common.sh@10 -- # set +x 00:10:25.175 17:14:33 -- nvmf/common.sh@470 -- # nvmfpid=2987820 00:10:25.175 17:14:33 -- nvmf/common.sh@471 -- # waitforlisten 2987820 00:10:25.175 17:14:33 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:25.175 17:14:33 -- common/autotest_common.sh@817 -- # '[' -z 2987820 ']' 00:10:25.175 17:14:33 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.175 17:14:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:25.175 17:14:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.175 17:14:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:25.175 17:14:33 -- common/autotest_common.sh@10 -- # set +x 00:10:25.175 [2024-04-24 17:14:33.748223] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:10:25.175 [2024-04-24 17:14:33.748269] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.175 EAL: No free 2048 kB hugepages reported on node 1 00:10:25.175 [2024-04-24 17:14:33.802979] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:25.175 [2024-04-24 17:14:33.879905] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:25.175 [2024-04-24 17:14:33.879945] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:25.175 [2024-04-24 17:14:33.879952] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:25.175 [2024-04-24 17:14:33.879958] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:25.175 [2024-04-24 17:14:33.879963] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:25.175 [2024-04-24 17:14:33.880013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.175 [2024-04-24 17:14:33.880108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:25.175 [2024-04-24 17:14:33.880196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:25.176 [2024-04-24 17:14:33.880197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.434 17:14:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:25.434 17:14:34 -- common/autotest_common.sh@850 -- # return 0 00:10:25.434 17:14:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:25.434 17:14:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:25.434 17:14:34 -- common/autotest_common.sh@10 -- # set +x 00:10:25.434 17:14:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:25.434 17:14:34 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:25.692 [2024-04-24 17:14:34.748768] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b47f60/0x1b4c450) succeed. 00:10:25.692 [2024-04-24 17:14:34.759134] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b49550/0x1b8dae0) succeed. 
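The ns_masking run that follows builds on this target: two 64 MiB malloc bdevs are created, Malloc1 is attached as namespace 1 of cnode1, the initiator connects over RDMA with an explicit host NQN, and visibility is checked from the host with nvme list-ns / nvme id-ns. The namespace is later re-added with --no-auto-visible and exposed per host via nvmf_ns_add_host / nvmf_ns_remove_host. A compressed sketch of that masking control path, with the NQNs and address taken from this run (the host-ID and queue-count flags used by the test's connect helper are omitted here):

    # target: attach the namespace hidden by default, then grant it to one host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

    # initiator: connect as that host and confirm the namespace is listed
    nvme connect -t rdma -a 192.168.100.8 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    nvme list-ns /dev/nvme0

    # revoking access hides the namespace again for this host
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1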
00:10:25.692 17:14:34 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:10:25.692 17:14:34 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:10:25.692 17:14:34 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:25.951 Malloc1 00:10:25.951 17:14:35 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:26.210 Malloc2 00:10:26.210 17:14:35 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:26.210 17:14:35 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:10:26.469 17:14:35 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:26.728 [2024-04-24 17:14:35.762888] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:26.728 17:14:35 -- target/ns_masking.sh@61 -- # connect 00:10:26.728 17:14:35 -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3b6198cb-517b-4f47-8b14-d07d721f2eb3 -a 192.168.100.8 -s 4420 -i 4 00:10:26.986 17:14:36 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:10:26.986 17:14:36 -- common/autotest_common.sh@1184 -- # local i=0 00:10:26.986 17:14:36 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:26.986 17:14:36 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:10:26.986 17:14:36 -- common/autotest_common.sh@1191 -- # sleep 2 00:10:28.890 17:14:38 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:10:28.890 17:14:38 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:10:28.890 17:14:38 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:10:28.890 17:14:38 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:10:28.890 17:14:38 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:10:28.890 17:14:38 -- common/autotest_common.sh@1194 -- # return 0 00:10:28.890 17:14:38 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:28.890 17:14:38 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:10:28.890 17:14:38 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:10:28.890 17:14:38 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:10:28.890 17:14:38 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:10:28.890 17:14:38 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:28.890 17:14:38 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:28.890 [ 0]:0x1 00:10:29.148 17:14:38 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:29.148 17:14:38 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:29.148 17:14:38 -- target/ns_masking.sh@40 -- # nguid=b4ddaaf3cd4f48659a1cad5e5000ffd3 00:10:29.148 17:14:38 -- target/ns_masking.sh@41 -- # [[ b4ddaaf3cd4f48659a1cad5e5000ffd3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:29.148 17:14:38 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Malloc2 -n 2 00:10:29.148 17:14:38 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:10:29.148 17:14:38 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:29.148 17:14:38 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:29.148 [ 0]:0x1 00:10:29.148 17:14:38 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:29.148 17:14:38 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:29.406 17:14:38 -- target/ns_masking.sh@40 -- # nguid=b4ddaaf3cd4f48659a1cad5e5000ffd3 00:10:29.406 17:14:38 -- target/ns_masking.sh@41 -- # [[ b4ddaaf3cd4f48659a1cad5e5000ffd3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:29.406 17:14:38 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:10:29.406 17:14:38 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:29.406 17:14:38 -- target/ns_masking.sh@39 -- # grep 0x2 00:10:29.406 [ 1]:0x2 00:10:29.406 17:14:38 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:29.406 17:14:38 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:29.406 17:14:38 -- target/ns_masking.sh@40 -- # nguid=7dac9c8f4bdb499bb8d52194143aa23c 00:10:29.407 17:14:38 -- target/ns_masking.sh@41 -- # [[ 7dac9c8f4bdb499bb8d52194143aa23c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:29.407 17:14:38 -- target/ns_masking.sh@69 -- # disconnect 00:10:29.407 17:14:38 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:29.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.665 17:14:38 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.923 17:14:39 -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:10:30.182 17:14:39 -- target/ns_masking.sh@77 -- # connect 1 00:10:30.182 17:14:39 -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3b6198cb-517b-4f47-8b14-d07d721f2eb3 -a 192.168.100.8 -s 4420 -i 4 00:10:30.439 17:14:39 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:10:30.439 17:14:39 -- common/autotest_common.sh@1184 -- # local i=0 00:10:30.439 17:14:39 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:30.439 17:14:39 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:10:30.439 17:14:39 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:10:30.439 17:14:39 -- common/autotest_common.sh@1191 -- # sleep 2 00:10:32.341 17:14:41 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:10:32.341 17:14:41 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:10:32.341 17:14:41 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:10:32.341 17:14:41 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:10:32.341 17:14:41 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:10:32.341 17:14:41 -- common/autotest_common.sh@1194 -- # return 0 00:10:32.341 17:14:41 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:10:32.341 17:14:41 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:32.341 17:14:41 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:10:32.341 17:14:41 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:10:32.341 17:14:41 -- 
target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:10:32.341 17:14:41 -- common/autotest_common.sh@638 -- # local es=0 00:10:32.341 17:14:41 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:10:32.341 17:14:41 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:10:32.341 17:14:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:32.341 17:14:41 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:10:32.341 17:14:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:32.341 17:14:41 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:10:32.341 17:14:41 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:32.341 17:14:41 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:32.341 17:14:41 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:32.341 17:14:41 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:32.341 17:14:41 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:32.341 17:14:41 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:32.341 17:14:41 -- common/autotest_common.sh@641 -- # es=1 00:10:32.341 17:14:41 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:32.341 17:14:41 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:32.341 17:14:41 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:32.341 17:14:41 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:10:32.341 17:14:41 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:32.341 17:14:41 -- target/ns_masking.sh@39 -- # grep 0x2 00:10:32.341 [ 0]:0x2 00:10:32.341 17:14:41 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:32.341 17:14:41 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:32.598 17:14:41 -- target/ns_masking.sh@40 -- # nguid=7dac9c8f4bdb499bb8d52194143aa23c 00:10:32.598 17:14:41 -- target/ns_masking.sh@41 -- # [[ 7dac9c8f4bdb499bb8d52194143aa23c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:32.598 17:14:41 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:32.598 17:14:41 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:10:32.598 17:14:41 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:32.598 17:14:41 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:32.598 [ 0]:0x1 00:10:32.598 17:14:41 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:32.598 17:14:41 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:32.598 17:14:41 -- target/ns_masking.sh@40 -- # nguid=b4ddaaf3cd4f48659a1cad5e5000ffd3 00:10:32.598 17:14:41 -- target/ns_masking.sh@41 -- # [[ b4ddaaf3cd4f48659a1cad5e5000ffd3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:32.598 17:14:41 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:10:32.598 17:14:41 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:32.598 17:14:41 -- target/ns_masking.sh@39 -- # grep 0x2 00:10:32.856 [ 1]:0x2 00:10:32.856 17:14:41 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:32.856 17:14:41 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:32.856 17:14:41 -- target/ns_masking.sh@40 -- # nguid=7dac9c8f4bdb499bb8d52194143aa23c 00:10:32.856 17:14:41 -- target/ns_masking.sh@41 -- # [[ 7dac9c8f4bdb499bb8d52194143aa23c != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:32.856 17:14:41 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:32.856 17:14:42 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:10:32.856 17:14:42 -- common/autotest_common.sh@638 -- # local es=0 00:10:32.856 17:14:42 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:10:32.856 17:14:42 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:10:32.856 17:14:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:32.856 17:14:42 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:10:32.856 17:14:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:32.856 17:14:42 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:10:32.856 17:14:42 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:32.856 17:14:42 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:32.856 17:14:42 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:32.856 17:14:42 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:33.117 17:14:42 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:33.117 17:14:42 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:33.117 17:14:42 -- common/autotest_common.sh@641 -- # es=1 00:10:33.117 17:14:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:33.117 17:14:42 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:33.117 17:14:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:33.117 17:14:42 -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:10:33.117 17:14:42 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:33.117 17:14:42 -- target/ns_masking.sh@39 -- # grep 0x2 00:10:33.117 [ 0]:0x2 00:10:33.117 17:14:42 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:33.117 17:14:42 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:33.117 17:14:42 -- target/ns_masking.sh@40 -- # nguid=7dac9c8f4bdb499bb8d52194143aa23c 00:10:33.117 17:14:42 -- target/ns_masking.sh@41 -- # [[ 7dac9c8f4bdb499bb8d52194143aa23c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:33.117 17:14:42 -- target/ns_masking.sh@91 -- # disconnect 00:10:33.117 17:14:42 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:33.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.375 17:14:42 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:33.634 17:14:42 -- target/ns_masking.sh@95 -- # connect 2 00:10:33.634 17:14:42 -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3b6198cb-517b-4f47-8b14-d07d721f2eb3 -a 192.168.100.8 -s 4420 -i 4 00:10:33.892 17:14:42 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:33.892 17:14:42 -- common/autotest_common.sh@1184 -- # local i=0 00:10:33.892 17:14:42 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:33.892 17:14:42 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:10:33.892 17:14:42 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:10:33.892 17:14:42 -- 
common/autotest_common.sh@1191 -- # sleep 2 00:10:35.794 17:14:44 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:10:35.794 17:14:44 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:10:35.794 17:14:44 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:10:35.794 17:14:44 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:10:35.794 17:14:44 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:10:35.794 17:14:44 -- common/autotest_common.sh@1194 -- # return 0 00:10:35.794 17:14:44 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:10:35.794 17:14:44 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:35.794 17:14:45 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:10:35.794 17:14:45 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:10:35.794 17:14:45 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:10:35.794 17:14:45 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:35.794 17:14:45 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:35.794 [ 0]:0x1 00:10:35.794 17:14:45 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:35.794 17:14:45 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:36.052 17:14:45 -- target/ns_masking.sh@40 -- # nguid=b4ddaaf3cd4f48659a1cad5e5000ffd3 00:10:36.052 17:14:45 -- target/ns_masking.sh@41 -- # [[ b4ddaaf3cd4f48659a1cad5e5000ffd3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:36.052 17:14:45 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:10:36.053 17:14:45 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:36.053 17:14:45 -- target/ns_masking.sh@39 -- # grep 0x2 00:10:36.053 [ 1]:0x2 00:10:36.053 17:14:45 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:36.053 17:14:45 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:36.053 17:14:45 -- target/ns_masking.sh@40 -- # nguid=7dac9c8f4bdb499bb8d52194143aa23c 00:10:36.053 17:14:45 -- target/ns_masking.sh@41 -- # [[ 7dac9c8f4bdb499bb8d52194143aa23c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:36.053 17:14:45 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:36.053 17:14:45 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:10:36.053 17:14:45 -- common/autotest_common.sh@638 -- # local es=0 00:10:36.053 17:14:45 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:10:36.053 17:14:45 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:10:36.053 17:14:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:36.053 17:14:45 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:10:36.053 17:14:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:36.053 17:14:45 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:10:36.053 17:14:45 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:36.053 17:14:45 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:36.053 17:14:45 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:36.053 17:14:45 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:36.311 17:14:45 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:36.311 17:14:45 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:36.311 17:14:45 -- common/autotest_common.sh@641 -- # es=1 00:10:36.311 17:14:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:36.311 17:14:45 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:36.311 17:14:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:36.311 17:14:45 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:10:36.311 17:14:45 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:36.311 17:14:45 -- target/ns_masking.sh@39 -- # grep 0x2 00:10:36.311 [ 0]:0x2 00:10:36.311 17:14:45 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:36.311 17:14:45 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:36.311 17:14:45 -- target/ns_masking.sh@40 -- # nguid=7dac9c8f4bdb499bb8d52194143aa23c 00:10:36.311 17:14:45 -- target/ns_masking.sh@41 -- # [[ 7dac9c8f4bdb499bb8d52194143aa23c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:36.311 17:14:45 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:36.311 17:14:45 -- common/autotest_common.sh@638 -- # local es=0 00:10:36.311 17:14:45 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:36.311 17:14:45 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:36.311 17:14:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:36.311 17:14:45 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:36.312 17:14:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:36.312 17:14:45 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:36.312 17:14:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:36.312 17:14:45 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:36.312 17:14:45 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:10:36.312 17:14:45 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:36.312 [2024-04-24 17:14:45.530988] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:10:36.312 request: 00:10:36.312 { 00:10:36.312 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:36.312 "nsid": 2, 00:10:36.312 "host": "nqn.2016-06.io.spdk:host1", 00:10:36.312 "method": "nvmf_ns_remove_host", 00:10:36.312 "req_id": 1 00:10:36.312 } 00:10:36.312 Got JSON-RPC error response 00:10:36.312 response: 00:10:36.312 { 00:10:36.312 "code": -32602, 00:10:36.312 "message": "Invalid parameters" 00:10:36.312 } 00:10:36.570 17:14:45 -- common/autotest_common.sh@641 -- # es=1 00:10:36.570 17:14:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:36.570 17:14:45 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:36.570 17:14:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:36.570 17:14:45 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:10:36.570 17:14:45 -- 
common/autotest_common.sh@638 -- # local es=0 00:10:36.570 17:14:45 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:10:36.570 17:14:45 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:10:36.570 17:14:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:36.570 17:14:45 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:10:36.570 17:14:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:36.570 17:14:45 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:10:36.570 17:14:45 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:36.570 17:14:45 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:36.570 17:14:45 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:36.570 17:14:45 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:36.570 17:14:45 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:36.570 17:14:45 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:36.570 17:14:45 -- common/autotest_common.sh@641 -- # es=1 00:10:36.570 17:14:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:36.570 17:14:45 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:36.570 17:14:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:36.570 17:14:45 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:10:36.570 17:14:45 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:36.570 17:14:45 -- target/ns_masking.sh@39 -- # grep 0x2 00:10:36.570 [ 0]:0x2 00:10:36.570 17:14:45 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:36.570 17:14:45 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:36.570 17:14:45 -- target/ns_masking.sh@40 -- # nguid=7dac9c8f4bdb499bb8d52194143aa23c 00:10:36.570 17:14:45 -- target/ns_masking.sh@41 -- # [[ 7dac9c8f4bdb499bb8d52194143aa23c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:36.570 17:14:45 -- target/ns_masking.sh@108 -- # disconnect 00:10:36.570 17:14:45 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:36.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.828 17:14:45 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:37.087 17:14:46 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:10:37.087 17:14:46 -- target/ns_masking.sh@114 -- # nvmftestfini 00:10:37.087 17:14:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:37.087 17:14:46 -- nvmf/common.sh@117 -- # sync 00:10:37.087 17:14:46 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:37.087 17:14:46 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:37.087 17:14:46 -- nvmf/common.sh@120 -- # set +e 00:10:37.087 17:14:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:37.087 17:14:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:37.087 rmmod nvme_rdma 00:10:37.087 rmmod nvme_fabrics 00:10:37.087 17:14:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:37.087 17:14:46 -- nvmf/common.sh@124 -- # set -e 00:10:37.087 17:14:46 -- nvmf/common.sh@125 -- # return 0 00:10:37.087 17:14:46 -- nvmf/common.sh@478 -- # '[' -n 2987820 ']' 00:10:37.087 17:14:46 -- nvmf/common.sh@479 -- # killprocess 2987820 00:10:37.087 17:14:46 -- common/autotest_common.sh@936 -- # '[' -z 2987820 ']' 00:10:37.087 17:14:46 -- 
common/autotest_common.sh@940 -- # kill -0 2987820 00:10:37.087 17:14:46 -- common/autotest_common.sh@941 -- # uname 00:10:37.087 17:14:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:37.087 17:14:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2987820 00:10:37.087 17:14:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:37.087 17:14:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:37.087 17:14:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2987820' 00:10:37.087 killing process with pid 2987820 00:10:37.087 17:14:46 -- common/autotest_common.sh@955 -- # kill 2987820 00:10:37.087 17:14:46 -- common/autotest_common.sh@960 -- # wait 2987820 00:10:37.345 17:14:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:37.345 17:14:46 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:10:37.345 00:10:37.345 real 0m17.607s 00:10:37.345 user 0m54.454s 00:10:37.345 sys 0m4.680s 00:10:37.345 17:14:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:37.346 17:14:46 -- common/autotest_common.sh@10 -- # set +x 00:10:37.346 ************************************ 00:10:37.346 END TEST nvmf_ns_masking 00:10:37.346 ************************************ 00:10:37.604 17:14:46 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:10:37.604 17:14:46 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:10:37.604 17:14:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:37.604 17:14:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:37.604 17:14:46 -- common/autotest_common.sh@10 -- # set +x 00:10:37.604 ************************************ 00:10:37.604 START TEST nvmf_nvme_cli 00:10:37.604 ************************************ 00:10:37.604 17:14:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:10:37.604 * Looking for test storage... 
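The nvmf_ns_masking run above boils down to a short RPC sequence; as a hand-written recap (not captured output — rpc.py stands in for the full scripts/rpc.py path used in the trace, and the NQNs and addresses are the ones shown above):

    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1     # namespace 1 becomes visible to host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1  # and is masked again
    nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -a 192.168.100.8 -s 4420
    nvme list-ns /dev/nvme0    # the connected host only sees namespaces granted to it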
00:10:37.604 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:37.604 17:14:46 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.604 17:14:46 -- nvmf/common.sh@7 -- # uname -s 00:10:37.604 17:14:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.604 17:14:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.604 17:14:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.604 17:14:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.604 17:14:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.604 17:14:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.604 17:14:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.604 17:14:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.604 17:14:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.604 17:14:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.604 17:14:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:10:37.604 17:14:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:10:37.604 17:14:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.604 17:14:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.604 17:14:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:37.604 17:14:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.604 17:14:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:37.604 17:14:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.604 17:14:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.604 17:14:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.604 17:14:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.604 17:14:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.605 17:14:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.605 17:14:46 -- paths/export.sh@5 -- # export PATH 00:10:37.605 17:14:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.605 17:14:46 -- nvmf/common.sh@47 -- # : 0 00:10:37.605 17:14:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:37.605 17:14:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:37.605 17:14:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.605 17:14:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.605 17:14:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.605 17:14:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:37.605 17:14:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:37.605 17:14:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:37.605 17:14:46 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:37.605 17:14:46 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:37.605 17:14:46 -- target/nvme_cli.sh@14 -- # devs=() 00:10:37.605 17:14:46 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:10:37.605 17:14:46 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:10:37.605 17:14:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.605 17:14:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:37.605 17:14:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:37.605 17:14:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:37.605 17:14:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.605 17:14:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:37.605 17:14:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.605 17:14:46 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:37.605 17:14:46 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:37.605 17:14:46 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:37.605 17:14:46 -- common/autotest_common.sh@10 -- # set +x 00:10:42.955 17:14:51 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:42.955 17:14:51 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:42.955 17:14:51 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:42.955 17:14:51 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:42.955 17:14:51 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:42.955 17:14:51 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:42.955 17:14:51 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:42.955 17:14:51 -- nvmf/common.sh@295 -- # net_devs=() 00:10:42.955 17:14:51 -- nvmf/common.sh@295 -- 
# local -ga net_devs 00:10:42.955 17:14:51 -- nvmf/common.sh@296 -- # e810=() 00:10:42.955 17:14:51 -- nvmf/common.sh@296 -- # local -ga e810 00:10:42.955 17:14:51 -- nvmf/common.sh@297 -- # x722=() 00:10:42.955 17:14:51 -- nvmf/common.sh@297 -- # local -ga x722 00:10:42.955 17:14:51 -- nvmf/common.sh@298 -- # mlx=() 00:10:42.955 17:14:51 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:42.955 17:14:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:42.955 17:14:51 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:42.955 17:14:51 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:42.955 17:14:51 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:42.955 17:14:51 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:42.955 17:14:51 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:42.955 17:14:51 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:42.955 17:14:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:42.955 17:14:51 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:42.955 17:14:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:42.955 17:14:51 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:42.955 17:14:51 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:42.955 17:14:51 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:42.955 17:14:51 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:42.955 17:14:51 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:42.955 17:14:51 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:42.955 17:14:51 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:42.955 17:14:51 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:42.955 17:14:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:42.955 17:14:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:10:42.955 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:10:42.955 17:14:51 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:42.955 17:14:51 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:42.955 17:14:51 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:42.955 17:14:51 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:42.955 17:14:51 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:42.955 17:14:51 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:42.955 17:14:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:42.955 17:14:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:10:42.955 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:10:42.955 17:14:51 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:42.955 17:14:51 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:42.955 17:14:51 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:42.955 17:14:51 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:42.955 17:14:51 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:42.955 17:14:51 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:42.955 17:14:51 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:42.955 17:14:51 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:42.955 17:14:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:42.955 17:14:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:10:42.955 17:14:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:42.955 17:14:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:42.955 17:14:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:10:42.955 Found net devices under 0000:da:00.0: mlx_0_0 00:10:42.955 17:14:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:42.955 17:14:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:42.955 17:14:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:42.955 17:14:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:42.955 17:14:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:42.955 17:14:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:10:42.955 Found net devices under 0000:da:00.1: mlx_0_1 00:10:42.955 17:14:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:42.955 17:14:51 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:42.955 17:14:51 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:42.955 17:14:51 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:42.955 17:14:51 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:10:42.955 17:14:51 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:10:42.955 17:14:51 -- nvmf/common.sh@409 -- # rdma_device_init 00:10:42.955 17:14:51 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:10:42.955 17:14:51 -- nvmf/common.sh@58 -- # uname 00:10:42.955 17:14:51 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:42.955 17:14:51 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:42.955 17:14:51 -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:42.955 17:14:51 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:42.955 17:14:51 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:42.955 17:14:51 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:42.955 17:14:51 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:42.955 17:14:51 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:42.955 17:14:51 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:10:42.955 17:14:51 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:42.955 17:14:51 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:42.955 17:14:51 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:42.955 17:14:51 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:42.955 17:14:51 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:42.955 17:14:51 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:42.955 17:14:51 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:42.955 17:14:51 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:42.955 17:14:51 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:42.955 17:14:51 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:42.955 17:14:51 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:42.955 17:14:51 -- nvmf/common.sh@105 -- # continue 2 00:10:42.955 17:14:51 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:42.955 17:14:51 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:42.955 17:14:51 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:42.955 17:14:51 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:42.955 17:14:51 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:42.955 17:14:51 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:42.955 17:14:51 -- nvmf/common.sh@105 -- # continue 2 00:10:42.955 
17:14:51 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:42.955 17:14:51 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:42.955 17:14:51 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:42.955 17:14:51 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:42.955 17:14:51 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:42.955 17:14:51 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:42.955 17:14:51 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:42.955 17:14:51 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:42.955 17:14:51 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:42.955 430: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:42.955 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:10:42.955 altname enp218s0f0np0 00:10:42.955 altname ens818f0np0 00:10:42.955 inet 192.168.100.8/24 scope global mlx_0_0 00:10:42.955 valid_lft forever preferred_lft forever 00:10:42.955 17:14:51 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:42.955 17:14:51 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:42.955 17:14:51 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:42.955 17:14:51 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:42.955 17:14:51 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:42.955 17:14:51 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:42.955 17:14:51 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:42.955 17:14:51 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:42.955 17:14:51 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:42.955 431: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:42.955 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:10:42.955 altname enp218s0f1np1 00:10:42.955 altname ens818f1np1 00:10:42.955 inet 192.168.100.9/24 scope global mlx_0_1 00:10:42.955 valid_lft forever preferred_lft forever 00:10:42.955 17:14:51 -- nvmf/common.sh@411 -- # return 0 00:10:42.955 17:14:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:42.955 17:14:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:42.955 17:14:51 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:10:42.955 17:14:51 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:10:42.955 17:14:51 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:42.955 17:14:51 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:42.956 17:14:51 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:42.956 17:14:51 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:42.956 17:14:51 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:42.956 17:14:51 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:42.956 17:14:51 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:42.956 17:14:51 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:42.956 17:14:51 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:42.956 17:14:51 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:42.956 17:14:51 -- nvmf/common.sh@105 -- # continue 2 00:10:42.956 17:14:51 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:42.956 17:14:51 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:42.956 17:14:51 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:42.956 17:14:51 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:42.956 17:14:51 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:42.956 17:14:51 -- nvmf/common.sh@104 -- 
# echo mlx_0_1 00:10:42.956 17:14:51 -- nvmf/common.sh@105 -- # continue 2 00:10:42.956 17:14:51 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:42.956 17:14:51 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:42.956 17:14:51 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:42.956 17:14:51 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:42.956 17:14:51 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:42.956 17:14:51 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:42.956 17:14:51 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:42.956 17:14:51 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:42.956 17:14:51 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:42.956 17:14:51 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:42.956 17:14:51 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:42.956 17:14:51 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:42.956 17:14:51 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:10:42.956 192.168.100.9' 00:10:42.956 17:14:51 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:10:42.956 192.168.100.9' 00:10:42.956 17:14:51 -- nvmf/common.sh@446 -- # head -n 1 00:10:42.956 17:14:51 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:42.956 17:14:51 -- nvmf/common.sh@447 -- # tail -n +2 00:10:42.956 17:14:51 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:10:42.956 192.168.100.9' 00:10:42.956 17:14:51 -- nvmf/common.sh@447 -- # head -n 1 00:10:42.956 17:14:51 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:42.956 17:14:51 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:10:42.956 17:14:51 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:42.956 17:14:51 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:10:42.956 17:14:51 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:10:42.956 17:14:51 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:10:42.956 17:14:51 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:10:42.956 17:14:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:42.956 17:14:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:42.956 17:14:51 -- common/autotest_common.sh@10 -- # set +x 00:10:42.956 17:14:51 -- nvmf/common.sh@470 -- # nvmfpid=2990367 00:10:42.956 17:14:51 -- nvmf/common.sh@471 -- # waitforlisten 2990367 00:10:42.956 17:14:51 -- common/autotest_common.sh@817 -- # '[' -z 2990367 ']' 00:10:42.956 17:14:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.956 17:14:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:42.956 17:14:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.956 17:14:51 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:42.956 17:14:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:42.956 17:14:51 -- common/autotest_common.sh@10 -- # set +x 00:10:42.956 [2024-04-24 17:14:51.969000] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
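For reference, the RDMA plumbing that nvmf/common.sh drives before the target comes up amounts to the following (a hand-written summary of the trace above, not captured output; interface names and addresses are the ones detected on this node):

    modprobe ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm
    ip -o -4 addr show mlx_0_0    # 192.168.100.8/24 on this rig
    ip -o -4 addr show mlx_0_1    # 192.168.100.9/24
    modprobe nvme-rdma
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF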
00:10:42.956 [2024-04-24 17:14:51.969046] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.956 EAL: No free 2048 kB hugepages reported on node 1 00:10:42.956 [2024-04-24 17:14:52.025324] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:42.956 [2024-04-24 17:14:52.103345] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:42.956 [2024-04-24 17:14:52.103381] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:42.956 [2024-04-24 17:14:52.103388] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:42.956 [2024-04-24 17:14:52.103394] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:42.956 [2024-04-24 17:14:52.103399] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:42.956 [2024-04-24 17:14:52.103437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.956 [2024-04-24 17:14:52.103477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:42.956 [2024-04-24 17:14:52.103478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.956 [2024-04-24 17:14:52.103454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:43.524 17:14:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:43.524 17:14:52 -- common/autotest_common.sh@850 -- # return 0 00:10:43.525 17:14:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:43.525 17:14:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:43.525 17:14:52 -- common/autotest_common.sh@10 -- # set +x 00:10:43.783 17:14:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:43.783 17:14:52 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:43.783 17:14:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:43.783 17:14:52 -- common/autotest_common.sh@10 -- # set +x 00:10:43.783 [2024-04-24 17:14:52.833835] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x12eaf60/0x12ef450) succeed. 00:10:43.783 [2024-04-24 17:14:52.843918] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x12ec550/0x1330ae0) succeed. 
00:10:43.783 17:14:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:43.783 17:14:52 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:43.783 17:14:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:43.783 17:14:52 -- common/autotest_common.sh@10 -- # set +x 00:10:43.783 Malloc0 00:10:43.783 17:14:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:43.783 17:14:52 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:43.783 17:14:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:43.783 17:14:52 -- common/autotest_common.sh@10 -- # set +x 00:10:43.783 Malloc1 00:10:43.783 17:14:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:43.784 17:14:53 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:10:43.784 17:14:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:43.784 17:14:53 -- common/autotest_common.sh@10 -- # set +x 00:10:43.784 17:14:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:43.784 17:14:53 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:43.784 17:14:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:43.784 17:14:53 -- common/autotest_common.sh@10 -- # set +x 00:10:43.784 17:14:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:43.784 17:14:53 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:43.784 17:14:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:43.784 17:14:53 -- common/autotest_common.sh@10 -- # set +x 00:10:44.042 17:14:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:44.042 17:14:53 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:44.042 17:14:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:44.042 17:14:53 -- common/autotest_common.sh@10 -- # set +x 00:10:44.042 [2024-04-24 17:14:53.039094] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:44.042 17:14:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:44.042 17:14:53 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:44.042 17:14:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:44.042 17:14:53 -- common/autotest_common.sh@10 -- # set +x 00:10:44.042 17:14:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:44.042 17:14:53 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:10:44.042 00:10:44.042 Discovery Log Number of Records 2, Generation counter 2 00:10:44.042 =====Discovery Log Entry 0====== 00:10:44.042 trtype: rdma 00:10:44.042 adrfam: ipv4 00:10:44.042 subtype: current discovery subsystem 00:10:44.042 treq: not required 00:10:44.042 portid: 0 00:10:44.042 trsvcid: 4420 00:10:44.042 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:44.042 traddr: 192.168.100.8 00:10:44.042 eflags: explicit discovery connections, duplicate discovery information 00:10:44.042 rdma_prtype: not specified 00:10:44.042 rdma_qptype: connected 00:10:44.043 rdma_cms: rdma-cm 00:10:44.043 rdma_pkey: 0x0000 00:10:44.043 =====Discovery Log Entry 1====== 00:10:44.043 trtype: rdma 
00:10:44.043 adrfam: ipv4 00:10:44.043 subtype: nvme subsystem 00:10:44.043 treq: not required 00:10:44.043 portid: 0 00:10:44.043 trsvcid: 4420 00:10:44.043 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:44.043 traddr: 192.168.100.8 00:10:44.043 eflags: none 00:10:44.043 rdma_prtype: not specified 00:10:44.043 rdma_qptype: connected 00:10:44.043 rdma_cms: rdma-cm 00:10:44.043 rdma_pkey: 0x0000 00:10:44.043 17:14:53 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:10:44.043 17:14:53 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:10:44.043 17:14:53 -- nvmf/common.sh@511 -- # local dev _ 00:10:44.043 17:14:53 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:44.043 17:14:53 -- nvmf/common.sh@510 -- # nvme list 00:10:44.043 17:14:53 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:10:44.043 17:14:53 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:44.043 17:14:53 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:10:44.043 17:14:53 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:44.043 17:14:53 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:10:44.043 17:14:53 -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:44.978 17:14:54 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:44.978 17:14:54 -- common/autotest_common.sh@1184 -- # local i=0 00:10:44.978 17:14:54 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:44.978 17:14:54 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:10:44.978 17:14:54 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:10:44.978 17:14:54 -- common/autotest_common.sh@1191 -- # sleep 2 00:10:46.879 17:14:56 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:10:46.879 17:14:56 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:10:46.879 17:14:56 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:10:47.137 17:14:56 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:10:47.137 17:14:56 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:10:47.137 17:14:56 -- common/autotest_common.sh@1194 -- # return 0 00:10:47.137 17:14:56 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:10:47.137 17:14:56 -- nvmf/common.sh@511 -- # local dev _ 00:10:47.137 17:14:56 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:47.137 17:14:56 -- nvmf/common.sh@510 -- # nvme list 00:10:47.137 17:14:56 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:10:47.137 17:14:56 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:47.137 17:14:56 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:10:47.137 17:14:56 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:47.137 17:14:56 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:47.137 17:14:56 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:10:47.137 17:14:56 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:47.137 17:14:56 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:47.137 17:14:56 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:10:47.137 17:14:56 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:47.137 17:14:56 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:10:47.137 /dev/nvme0n1 ]] 00:10:47.137 17:14:56 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:10:47.137 17:14:56 -- target/nvme_cli.sh@59 -- # get_nvme_devs 
00:10:47.137 17:14:56 -- nvmf/common.sh@511 -- # local dev _ 00:10:47.137 17:14:56 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:47.137 17:14:56 -- nvmf/common.sh@510 -- # nvme list 00:10:47.137 17:14:56 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:10:47.137 17:14:56 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:47.137 17:14:56 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:10:47.137 17:14:56 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:47.137 17:14:56 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:47.137 17:14:56 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:10:47.137 17:14:56 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:47.137 17:14:56 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:47.137 17:14:56 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:10:47.137 17:14:56 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:47.137 17:14:56 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:10:47.137 17:14:56 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:48.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.076 17:14:57 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:48.076 17:14:57 -- common/autotest_common.sh@1205 -- # local i=0 00:10:48.076 17:14:57 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:10:48.076 17:14:57 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:48.076 17:14:57 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:10:48.076 17:14:57 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:48.076 17:14:57 -- common/autotest_common.sh@1217 -- # return 0 00:10:48.076 17:14:57 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:10:48.076 17:14:57 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:48.076 17:14:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:48.076 17:14:57 -- common/autotest_common.sh@10 -- # set +x 00:10:48.076 17:14:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:48.076 17:14:57 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:48.076 17:14:57 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:10:48.076 17:14:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:48.076 17:14:57 -- nvmf/common.sh@117 -- # sync 00:10:48.076 17:14:57 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:48.076 17:14:57 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:48.076 17:14:57 -- nvmf/common.sh@120 -- # set +e 00:10:48.076 17:14:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:48.076 17:14:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:48.076 rmmod nvme_rdma 00:10:48.076 rmmod nvme_fabrics 00:10:48.076 17:14:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:48.076 17:14:57 -- nvmf/common.sh@124 -- # set -e 00:10:48.076 17:14:57 -- nvmf/common.sh@125 -- # return 0 00:10:48.076 17:14:57 -- nvmf/common.sh@478 -- # '[' -n 2990367 ']' 00:10:48.076 17:14:57 -- nvmf/common.sh@479 -- # killprocess 2990367 00:10:48.076 17:14:57 -- common/autotest_common.sh@936 -- # '[' -z 2990367 ']' 00:10:48.076 17:14:57 -- common/autotest_common.sh@940 -- # kill -0 2990367 00:10:48.076 17:14:57 -- common/autotest_common.sh@941 -- # uname 00:10:48.076 17:14:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:48.076 17:14:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2990367 00:10:48.076 17:14:57 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:48.076 17:14:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:48.076 17:14:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2990367' 00:10:48.076 killing process with pid 2990367 00:10:48.076 17:14:57 -- common/autotest_common.sh@955 -- # kill 2990367 00:10:48.076 17:14:57 -- common/autotest_common.sh@960 -- # wait 2990367 00:10:48.644 17:14:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:48.644 17:14:57 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:10:48.644 00:10:48.644 real 0m10.906s 00:10:48.644 user 0m23.318s 00:10:48.644 sys 0m4.353s 00:10:48.644 17:14:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:48.644 17:14:57 -- common/autotest_common.sh@10 -- # set +x 00:10:48.644 ************************************ 00:10:48.644 END TEST nvmf_nvme_cli 00:10:48.644 ************************************ 00:10:48.644 17:14:57 -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:10:48.644 17:14:57 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:10:48.644 17:14:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:48.644 17:14:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:48.644 17:14:57 -- common/autotest_common.sh@10 -- # set +x 00:10:48.644 ************************************ 00:10:48.644 START TEST nvmf_host_management 00:10:48.644 ************************************ 00:10:48.644 17:14:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:10:48.644 * Looking for test storage... 00:10:48.644 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:48.644 17:14:57 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:48.644 17:14:57 -- nvmf/common.sh@7 -- # uname -s 00:10:48.644 17:14:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:48.644 17:14:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:48.644 17:14:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:48.644 17:14:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:48.644 17:14:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:48.645 17:14:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:48.645 17:14:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:48.645 17:14:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:48.645 17:14:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:48.645 17:14:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:48.645 17:14:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:10:48.645 17:14:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:10:48.645 17:14:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:48.645 17:14:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:48.645 17:14:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:48.645 17:14:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:48.645 17:14:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:48.645 17:14:57 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.645 17:14:57 -- scripts/common.sh@510 -- 
# [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.645 17:14:57 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.645 17:14:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.645 17:14:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.645 17:14:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.645 17:14:57 -- paths/export.sh@5 -- # export PATH 00:10:48.645 17:14:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.645 17:14:57 -- nvmf/common.sh@47 -- # : 0 00:10:48.645 17:14:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:48.645 17:14:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:48.645 17:14:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:48.645 17:14:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:48.645 17:14:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:48.645 17:14:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:48.645 17:14:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:48.645 17:14:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:48.645 17:14:57 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:48.645 17:14:57 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:48.645 17:14:57 -- target/host_management.sh@105 -- # nvmftestinit 00:10:48.645 17:14:57 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:10:48.645 17:14:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:48.645 17:14:57 -- nvmf/common.sh@437 -- # 
prepare_net_devs 00:10:48.645 17:14:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:48.645 17:14:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:48.645 17:14:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.645 17:14:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:48.645 17:14:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.645 17:14:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:48.645 17:14:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:48.645 17:14:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:48.645 17:14:57 -- common/autotest_common.sh@10 -- # set +x 00:10:53.915 17:15:02 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:53.915 17:15:02 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:53.915 17:15:02 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:53.915 17:15:02 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:53.915 17:15:02 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:53.915 17:15:02 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:53.915 17:15:02 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:53.915 17:15:02 -- nvmf/common.sh@295 -- # net_devs=() 00:10:53.915 17:15:02 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:53.915 17:15:02 -- nvmf/common.sh@296 -- # e810=() 00:10:53.915 17:15:02 -- nvmf/common.sh@296 -- # local -ga e810 00:10:53.915 17:15:02 -- nvmf/common.sh@297 -- # x722=() 00:10:53.915 17:15:02 -- nvmf/common.sh@297 -- # local -ga x722 00:10:53.915 17:15:02 -- nvmf/common.sh@298 -- # mlx=() 00:10:53.915 17:15:02 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:53.915 17:15:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:53.915 17:15:02 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:53.915 17:15:02 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:53.915 17:15:02 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:53.915 17:15:02 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:53.915 17:15:02 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:53.916 17:15:02 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:53.916 17:15:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:53.916 17:15:02 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:53.916 17:15:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:53.916 17:15:02 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:53.916 17:15:02 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:53.916 17:15:02 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:53.916 17:15:02 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:53.916 17:15:02 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:53.916 17:15:02 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:53.916 17:15:02 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:53.916 17:15:02 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:53.916 17:15:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:53.916 17:15:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:10:53.916 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:10:53.916 17:15:02 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:53.916 17:15:02 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:53.916 
17:15:02 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:53.916 17:15:02 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:53.916 17:15:02 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:53.916 17:15:02 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:53.916 17:15:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:53.916 17:15:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:10:53.916 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:10:53.916 17:15:02 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:53.916 17:15:02 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:53.916 17:15:02 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:53.916 17:15:02 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:53.916 17:15:02 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:53.916 17:15:02 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:53.916 17:15:02 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:53.916 17:15:02 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:53.916 17:15:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:53.916 17:15:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.916 17:15:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:53.916 17:15:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.916 17:15:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:10:53.916 Found net devices under 0000:da:00.0: mlx_0_0 00:10:53.916 17:15:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.916 17:15:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:53.916 17:15:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.916 17:15:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:53.916 17:15:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.916 17:15:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:10:53.916 Found net devices under 0000:da:00.1: mlx_0_1 00:10:53.916 17:15:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.916 17:15:02 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:53.916 17:15:02 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:53.916 17:15:02 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:53.916 17:15:02 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:10:53.916 17:15:02 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:10:53.916 17:15:02 -- nvmf/common.sh@409 -- # rdma_device_init 00:10:53.916 17:15:02 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:10:53.916 17:15:02 -- nvmf/common.sh@58 -- # uname 00:10:53.916 17:15:02 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:53.916 17:15:02 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:53.916 17:15:02 -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:53.916 17:15:02 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:53.916 17:15:02 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:53.916 17:15:02 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:53.916 17:15:02 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:53.916 17:15:02 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:53.916 17:15:02 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:10:53.916 17:15:02 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:53.916 17:15:02 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:53.916 17:15:02 -- nvmf/common.sh@92 -- # local net_dev 
rxe_net_dev rxe_net_devs 00:10:53.916 17:15:02 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:53.916 17:15:02 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:53.916 17:15:02 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:53.916 17:15:02 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:53.916 17:15:02 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:53.916 17:15:02 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.916 17:15:02 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:53.916 17:15:02 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:53.916 17:15:02 -- nvmf/common.sh@105 -- # continue 2 00:10:53.916 17:15:02 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:53.916 17:15:02 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.916 17:15:02 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:53.916 17:15:02 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.916 17:15:02 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:53.916 17:15:02 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:53.916 17:15:02 -- nvmf/common.sh@105 -- # continue 2 00:10:53.916 17:15:02 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:53.916 17:15:02 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:53.916 17:15:02 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:53.916 17:15:02 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:53.916 17:15:02 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:53.916 17:15:02 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:53.916 17:15:02 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:53.916 17:15:02 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:53.916 17:15:02 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:53.916 430: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:53.916 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:10:53.916 altname enp218s0f0np0 00:10:53.916 altname ens818f0np0 00:10:53.916 inet 192.168.100.8/24 scope global mlx_0_0 00:10:53.916 valid_lft forever preferred_lft forever 00:10:53.916 17:15:02 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:53.916 17:15:02 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:53.916 17:15:02 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:53.916 17:15:02 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:53.916 17:15:02 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:53.916 17:15:02 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:53.916 17:15:02 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:53.916 17:15:02 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:53.916 17:15:02 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:53.916 431: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:53.916 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:10:53.916 altname enp218s0f1np1 00:10:53.916 altname ens818f1np1 00:10:53.916 inet 192.168.100.9/24 scope global mlx_0_1 00:10:53.916 valid_lft forever preferred_lft forever 00:10:53.916 17:15:02 -- nvmf/common.sh@411 -- # return 0 00:10:53.916 17:15:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:53.916 17:15:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:53.916 17:15:02 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:10:53.916 17:15:02 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:10:53.916 17:15:02 -- 
nvmf/common.sh@86 -- # get_rdma_if_list 00:10:53.916 17:15:02 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:53.916 17:15:02 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:53.916 17:15:02 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:53.916 17:15:02 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:53.916 17:15:02 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:53.916 17:15:02 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:53.916 17:15:02 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.916 17:15:02 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:53.916 17:15:02 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:53.916 17:15:02 -- nvmf/common.sh@105 -- # continue 2 00:10:53.916 17:15:02 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:53.916 17:15:02 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.916 17:15:02 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:53.916 17:15:02 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.916 17:15:02 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:53.916 17:15:02 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:53.916 17:15:02 -- nvmf/common.sh@105 -- # continue 2 00:10:53.916 17:15:02 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:53.916 17:15:02 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:53.917 17:15:02 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:53.917 17:15:02 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:53.917 17:15:02 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:53.917 17:15:02 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:53.917 17:15:02 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:53.917 17:15:02 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:53.917 17:15:02 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:53.917 17:15:02 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:53.917 17:15:02 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:53.917 17:15:02 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:53.917 17:15:02 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:10:53.917 192.168.100.9' 00:10:53.917 17:15:02 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:10:53.917 192.168.100.9' 00:10:53.917 17:15:02 -- nvmf/common.sh@446 -- # head -n 1 00:10:53.917 17:15:02 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:53.917 17:15:02 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:10:53.917 192.168.100.9' 00:10:53.917 17:15:02 -- nvmf/common.sh@447 -- # tail -n +2 00:10:53.917 17:15:02 -- nvmf/common.sh@447 -- # head -n 1 00:10:53.917 17:15:02 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:53.917 17:15:02 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:10:53.917 17:15:02 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:53.917 17:15:02 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:10:53.917 17:15:02 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:10:53.917 17:15:02 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:10:53.917 17:15:02 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:10:53.917 17:15:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:53.917 17:15:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:53.917 17:15:02 -- 
common/autotest_common.sh@10 -- # set +x 00:10:53.917 ************************************ 00:10:53.917 START TEST nvmf_host_management 00:10:53.917 ************************************ 00:10:53.917 17:15:03 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:10:53.917 17:15:03 -- target/host_management.sh@69 -- # starttarget 00:10:53.917 17:15:03 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:53.917 17:15:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:53.917 17:15:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:53.917 17:15:03 -- common/autotest_common.sh@10 -- # set +x 00:10:53.917 17:15:03 -- nvmf/common.sh@470 -- # nvmfpid=2992832 00:10:53.917 17:15:03 -- nvmf/common.sh@471 -- # waitforlisten 2992832 00:10:53.917 17:15:03 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:53.917 17:15:03 -- common/autotest_common.sh@817 -- # '[' -z 2992832 ']' 00:10:53.917 17:15:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.917 17:15:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:53.917 17:15:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.917 17:15:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:53.917 17:15:03 -- common/autotest_common.sh@10 -- # set +x 00:10:53.917 [2024-04-24 17:15:03.069049] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:10:53.917 [2024-04-24 17:15:03.069094] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.917 EAL: No free 2048 kB hugepages reported on node 1 00:10:53.917 [2024-04-24 17:15:03.125464] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:54.176 [2024-04-24 17:15:03.200331] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:54.176 [2024-04-24 17:15:03.200371] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:54.176 [2024-04-24 17:15:03.200377] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:54.176 [2024-04-24 17:15:03.200383] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:54.176 [2024-04-24 17:15:03.200388] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
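The target side of this test is driven over SPDK's JSON-RPC interface: nvmfappstart has just launched nvmf_tgt (pid 2992832), and starttarget next creates the RDMA transport, a Malloc bdev, and an NVMe-oF subsystem listening on 192.168.100.8:4420. Most of those calls are batched through rpcs.txt and not echoed below, so the following is only a sketch of roughly equivalent standalone scripts/rpc.py invocations, assembled from values visible elsewhere in this log (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, NVMF_SERIAL, the cnode0/host0 NQNs); the exact flags the test passes may differ.

# Sketch only -- the test batches these via rpc_cmd / rpcs.txt rather than calling rpc.py one by one.
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0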
00:10:54.176 [2024-04-24 17:15:03.200424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:54.176 [2024-04-24 17:15:03.200514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:54.176 [2024-04-24 17:15:03.200602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.176 [2024-04-24 17:15:03.200603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:54.744 17:15:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:54.744 17:15:03 -- common/autotest_common.sh@850 -- # return 0 00:10:54.744 17:15:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:54.744 17:15:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:54.744 17:15:03 -- common/autotest_common.sh@10 -- # set +x 00:10:54.744 17:15:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:54.744 17:15:03 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:54.744 17:15:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:54.744 17:15:03 -- common/autotest_common.sh@10 -- # set +x 00:10:54.744 [2024-04-24 17:15:03.935872] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2289250/0x228d740) succeed. 00:10:54.744 [2024-04-24 17:15:03.946079] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x228a840/0x22cedd0) succeed. 00:10:55.003 17:15:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.003 17:15:04 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:55.003 17:15:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:55.003 17:15:04 -- common/autotest_common.sh@10 -- # set +x 00:10:55.003 17:15:04 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:55.003 17:15:04 -- target/host_management.sh@23 -- # cat 00:10:55.003 17:15:04 -- target/host_management.sh@30 -- # rpc_cmd 00:10:55.003 17:15:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.003 17:15:04 -- common/autotest_common.sh@10 -- # set +x 00:10:55.003 Malloc0 00:10:55.003 [2024-04-24 17:15:04.120314] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:55.003 17:15:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.003 17:15:04 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:55.003 17:15:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:55.003 17:15:04 -- common/autotest_common.sh@10 -- # set +x 00:10:55.003 17:15:04 -- target/host_management.sh@73 -- # perfpid=2992891 00:10:55.003 17:15:04 -- target/host_management.sh@74 -- # waitforlisten 2992891 /var/tmp/bdevperf.sock 00:10:55.003 17:15:04 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:55.003 17:15:04 -- common/autotest_common.sh@817 -- # '[' -z 2992891 ']' 00:10:55.003 17:15:04 -- nvmf/common.sh@521 -- # config=() 00:10:55.003 17:15:04 -- nvmf/common.sh@521 -- # local subsystem config 00:10:55.003 17:15:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:55.003 17:15:04 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:55.003 17:15:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:10:55.003 17:15:04 -- nvmf/common.sh@543 -- # config+=("$(cat 
<<-EOF 00:10:55.003 { 00:10:55.003 "params": { 00:10:55.003 "name": "Nvme$subsystem", 00:10:55.003 "trtype": "$TEST_TRANSPORT", 00:10:55.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:55.003 "adrfam": "ipv4", 00:10:55.003 "trsvcid": "$NVMF_PORT", 00:10:55.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:55.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:55.003 "hdgst": ${hdgst:-false}, 00:10:55.003 "ddgst": ${ddgst:-false} 00:10:55.003 }, 00:10:55.003 "method": "bdev_nvme_attach_controller" 00:10:55.003 } 00:10:55.003 EOF 00:10:55.003 )") 00:10:55.003 17:15:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:55.003 17:15:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:55.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:55.003 17:15:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:55.003 17:15:04 -- common/autotest_common.sh@10 -- # set +x 00:10:55.003 17:15:04 -- nvmf/common.sh@543 -- # cat 00:10:55.003 17:15:04 -- nvmf/common.sh@545 -- # jq . 00:10:55.003 17:15:04 -- nvmf/common.sh@546 -- # IFS=, 00:10:55.003 17:15:04 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:10:55.003 "params": { 00:10:55.003 "name": "Nvme0", 00:10:55.003 "trtype": "rdma", 00:10:55.003 "traddr": "192.168.100.8", 00:10:55.003 "adrfam": "ipv4", 00:10:55.003 "trsvcid": "4420", 00:10:55.003 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:55.003 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:55.004 "hdgst": false, 00:10:55.004 "ddgst": false 00:10:55.004 }, 00:10:55.004 "method": "bdev_nvme_attach_controller" 00:10:55.004 }' 00:10:55.004 [2024-04-24 17:15:04.209768] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:10:55.004 [2024-04-24 17:15:04.209811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2992891 ] 00:10:55.004 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.263 [2024-04-24 17:15:04.264829] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.263 [2024-04-24 17:15:04.337930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.521 Running I/O for 10 seconds... 
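gen_nvmf_target_json 0 feeds bdevperf a bdev-subsystem configuration over --json /dev/fd/63; only the inner params block is echoed by the printf above. Fully assembled, the document handed to bdevperf should look roughly like the sketch below (the outer subsystems/config wrapper is not printed in this log, so treat that part as an assumption), and the manual invocation mirrors the bdevperf command line shown above:

# Hypothetical standalone reproduction; /tmp/bdevperf_nvme0.json is an invented path,
# the test uses a process substitution (/dev/fd/63) instead of a real file.
cat > /tmp/bdevperf_nvme0.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
JSON
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10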
00:10:55.780 17:15:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:55.780 17:15:05 -- common/autotest_common.sh@850 -- # return 0 00:10:55.780 17:15:05 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:55.780 17:15:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.780 17:15:05 -- common/autotest_common.sh@10 -- # set +x 00:10:56.039 17:15:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:56.039 17:15:05 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:56.039 17:15:05 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:56.039 17:15:05 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:56.039 17:15:05 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:56.039 17:15:05 -- target/host_management.sh@52 -- # local ret=1 00:10:56.039 17:15:05 -- target/host_management.sh@53 -- # local i 00:10:56.039 17:15:05 -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:56.039 17:15:05 -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:56.039 17:15:05 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:56.039 17:15:05 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:56.039 17:15:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:56.039 17:15:05 -- common/autotest_common.sh@10 -- # set +x 00:10:56.039 17:15:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:56.039 17:15:05 -- target/host_management.sh@55 -- # read_io_count=1579 00:10:56.039 17:15:05 -- target/host_management.sh@58 -- # '[' 1579 -ge 100 ']' 00:10:56.039 17:15:05 -- target/host_management.sh@59 -- # ret=0 00:10:56.039 17:15:05 -- target/host_management.sh@60 -- # break 00:10:56.039 17:15:05 -- target/host_management.sh@64 -- # return 0 00:10:56.039 17:15:05 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:56.039 17:15:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:56.039 17:15:05 -- common/autotest_common.sh@10 -- # set +x 00:10:56.039 17:15:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:56.039 17:15:05 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:56.039 17:15:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:56.039 17:15:05 -- common/autotest_common.sh@10 -- # set +x 00:10:56.039 17:15:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:56.039 17:15:05 -- target/host_management.sh@87 -- # sleep 1 00:10:56.978 [2024-04-24 17:15:06.109010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:10:56.978 [2024-04-24 17:15:06.109042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:4e5cdd70 sqhd:0000 p:0 m:0 dnr:0 00:10:56.978 [2024-04-24 17:15:06.109052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:10:56.978 [2024-04-24 17:15:06.109059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:4e5cdd70 sqhd:0000 p:0 m:0 dnr:0 00:10:56.978 [2024-04-24 17:15:06.109066] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:10:56.978 [2024-04-24 17:15:06.109072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:4e5cdd70 sqhd:0000 p:0 m:0 dnr:0 00:10:56.979 [2024-04-24 17:15:06.109079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:10:56.979 [2024-04-24 17:15:06.109085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:4e5cdd70 sqhd:0000 p:0 m:0 dnr:0 00:10:56.979 [2024-04-24 17:15:06.111080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:10:56.979 [2024-04-24 17:15:06.111093] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:10:56.979 [2024-04-24 17:15:06.111109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:90112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138da900 len:0x10000 key:0x182400 00:10:56.979 [2024-04-24 17:15:06.111117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.979 [2024-04-24 17:15:06.111135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca880 len:0x10000 key:0x182400 00:10:56.979 [2024-04-24 17:15:06.111143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.979 [2024-04-24 17:15:06.111153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:90368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ba800 len:0x10000 key:0x182400 00:10:56.979 [2024-04-24 17:15:06.111160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.979 [2024-04-24 17:15:06.111170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:90496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa780 len:0x10000 key:0x182400 00:10:56.979 [2024-04-24 17:15:06.111177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.979 [2024-04-24 17:15:06.111188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:90624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001389a700 len:0x10000 key:0x182400 00:10:56.979 [2024-04-24 17:15:06.111194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.979 [2024-04-24 17:15:06.111204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:90752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001388a680 len:0x10000 key:0x182400 00:10:56.979 [2024-04-24 17:15:06.111215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.979 [2024-04-24 17:15:06.111225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:90880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001387a600 len:0x10000 key:0x182400 00:10:56.979 
[2024-04-24 17:15:06.111232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.979 [2024-04-24 17:15:06.111242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001386a580 len:0x10000 key:0x182400 00:10:56.979 [2024-04-24 17:15:06.111248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.979 [2024-04-24 17:15:06.111258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:91136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001385a500 len:0x10000 key:0x182400 00:10:56.979 [2024-04-24 17:15:06.111264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.979 [2024-04-24 17:15:06.111275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001384a480 len:0x10000 key:0x182400 00:10:56.979 [2024-04-24 17:15:06.111281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.979 [2024-04-24 17:15:06.111291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:91392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001383a400 len:0x10000 key:0x182400 00:10:56.979 [2024-04-24 17:15:06.111299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.979 [2024-04-24 17:15:06.111308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:91520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001382a380 len:0x10000 key:0x182400 00:10:56.979 [2024-04-24 17:15:06.111315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.979 [2024-04-24 17:15:06.111325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:91648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001381a300 len:0x10000 key:0x182400 00:10:56.979 [2024-04-24 17:15:06.111332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.979 [2024-04-24 17:15:06.111342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:91776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001380a280 len:0x10000 key:0x182400 00:10:56.979 [2024-04-24 17:15:06.111349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.979 [2024-04-24 17:15:06.111358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:91904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192d1e80 len:0x10000 key:0x182700 00:10:56.979 [2024-04-24 17:15:06.111366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.979 [2024-04-24 17:15:06.111375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:92032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192c1e00 len:0x10000 key:0x182700 00:10:56.979 
[2024-04-24 17:15:06.111382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.979 [2024-04-24 17:15:06.111392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:92160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192b1d80 len:0x10000 key:0x182700 00:10:56.979 [2024-04-24 17:15:06.111398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.979 [2024-04-24 17:15:06.111410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192a1d00 len:0x10000 key:0x182700 00:10:56.979 [2024-04-24 17:15:06.111417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.979 [2024-04-24 17:15:06.111427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:92416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019291c80 len:0x10000 key:0x182700 00:10:56.979 [2024-04-24 17:15:06.111433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.979 [2024-04-24 17:15:06.111443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:92544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019281c00 len:0x10000 key:0x182700 00:10:56.979 [2024-04-24 17:15:06.111450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.979 [2024-04-24 17:15:06.111459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:92672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019271b80 len:0x10000 key:0x182700 00:10:56.979 [2024-04-24 17:15:06.111466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.979 [2024-04-24 17:15:06.111475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:92800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019261b00 len:0x10000 key:0x182700 00:10:56.979 [2024-04-24 17:15:06.111482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.979 [2024-04-24 17:15:06.111492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:92928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019251a80 len:0x10000 key:0x182700 00:10:56.979 [2024-04-24 17:15:06.111499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.979 [2024-04-24 17:15:06.111508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:93056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019241a00 len:0x10000 key:0x182700 00:10:56.979 [2024-04-24 17:15:06.111515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.979 [2024-04-24 17:15:06.111524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:93184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019231980 len:0x10000 key:0x182700 00:10:56.979 
[2024-04-24 17:15:06.111531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.979 [2024-04-24 17:15:06.111541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:93312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019221900 len:0x10000 key:0x182700 00:10:56.979 [2024-04-24 17:15:06.111548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.979 [2024-04-24 17:15:06.111557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:93440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019211880 len:0x10000 key:0x182700 00:10:56.979 [2024-04-24 17:15:06.111564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.979 [2024-04-24 17:15:06.111573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:93568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019201800 len:0x10000 key:0x182700 00:10:56.979 [2024-04-24 17:15:06.111580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.979 [2024-04-24 17:15:06.111591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:93696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190eff80 len:0x10000 key:0x182600 00:10:56.979 [2024-04-24 17:15:06.111598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.979 [2024-04-24 17:15:06.111608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:93824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190dff00 len:0x10000 key:0x182600 00:10:56.979 [2024-04-24 17:15:06.111614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.111624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:93952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cfe80 len:0x10000 key:0x182600 00:10:56.980 [2024-04-24 17:15:06.111631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.111641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bfe00 len:0x10000 key:0x182600 00:10:56.980 [2024-04-24 17:15:06.111647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.111657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:94208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190afd80 len:0x10000 key:0x182600 00:10:56.980 [2024-04-24 17:15:06.111663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.111673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:94336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909fd00 len:0x10000 key:0x182600 00:10:56.980 
[2024-04-24 17:15:06.111679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.111689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001908fc80 len:0x10000 key:0x182600 00:10:56.980 [2024-04-24 17:15:06.111696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.111705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907fc00 len:0x10000 key:0x182600 00:10:56.980 [2024-04-24 17:15:06.111712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.111721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001906fb80 len:0x10000 key:0x182600 00:10:56.980 [2024-04-24 17:15:06.111728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.111738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:94848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905fb00 len:0x10000 key:0x182600 00:10:56.980 [2024-04-24 17:15:06.111745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.111755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:94976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001904fa80 len:0x10000 key:0x182600 00:10:56.980 [2024-04-24 17:15:06.111762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.111773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:95104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001903fa00 len:0x10000 key:0x182600 00:10:56.980 [2024-04-24 17:15:06.111780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.111790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bab4000 len:0x10000 key:0x182300 00:10:56.980 [2024-04-24 17:15:06.111797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.111808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bad5000 len:0x10000 key:0x182300 00:10:56.980 [2024-04-24 17:15:06.111814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.111831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be2f000 len:0x10000 key:0x182300 00:10:56.980 
[2024-04-24 17:15:06.111838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.111848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be0e000 len:0x10000 key:0x182300 00:10:56.980 [2024-04-24 17:15:06.111855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.111865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bded000 len:0x10000 key:0x182300 00:10:56.980 [2024-04-24 17:15:06.111872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.111882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bdab000 len:0x10000 key:0x182300 00:10:56.980 [2024-04-24 17:15:06.111891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.111902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd8a000 len:0x10000 key:0x182300 00:10:56.980 [2024-04-24 17:15:06.111908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.111919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd69000 len:0x10000 key:0x182300 00:10:56.980 [2024-04-24 17:15:06.111925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.111936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df2f000 len:0x10000 key:0x182300 00:10:56.980 [2024-04-24 17:15:06.111942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.111953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df0e000 len:0x10000 key:0x182300 00:10:56.980 [2024-04-24 17:15:06.111960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.111972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000deed000 len:0x10000 key:0x182300 00:10:56.980 [2024-04-24 17:15:06.111978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.111989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000decc000 len:0x10000 key:0x182300 00:10:56.980 
[2024-04-24 17:15:06.111995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.112006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:88576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000deab000 len:0x10000 key:0x182300 00:10:56.980 [2024-04-24 17:15:06.112012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.112023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:88704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000de8a000 len:0x10000 key:0x182300 00:10:56.980 [2024-04-24 17:15:06.112030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.112040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000de69000 len:0x10000 key:0x182300 00:10:56.980 [2024-04-24 17:15:06.112047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.112058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:88960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000de48000 len:0x10000 key:0x182300 00:10:56.980 [2024-04-24 17:15:06.112064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.112075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:89088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000de27000 len:0x10000 key:0x182300 00:10:56.980 [2024-04-24 17:15:06.112081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.112092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:89216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000de06000 len:0x10000 key:0x182300 00:10:56.980 [2024-04-24 17:15:06.112098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.112109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:89344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dde5000 len:0x10000 key:0x182300 00:10:56.980 [2024-04-24 17:15:06.112116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.112127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:89472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ddc4000 len:0x10000 key:0x182300 00:10:56.980 [2024-04-24 17:15:06.112134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.112144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:89600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dda3000 len:0x10000 key:0x182300 00:10:56.980 [2024-04-24 
17:15:06.112151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.112161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:89728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dd82000 len:0x10000 key:0x182300 00:10:56.980 [2024-04-24 17:15:06.112169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.112179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:89856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dd61000 len:0x10000 key:0x182300 00:10:56.980 [2024-04-24 17:15:06.112186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 [2024-04-24 17:15:06.112196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:89984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dd40000 len:0x10000 key:0x182300 00:10:56.980 [2024-04-24 17:15:06.112203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:42384 cdw0:192ed080 sqhd:6e00 p:1 m:0 dnr:0 00:10:56.980 17:15:06 -- target/host_management.sh@91 -- # kill -9 2992891 00:10:56.980 17:15:06 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:56.981 17:15:06 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:56.981 17:15:06 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:56.981 17:15:06 -- nvmf/common.sh@521 -- # config=() 00:10:56.981 17:15:06 -- nvmf/common.sh@521 -- # local subsystem config 00:10:56.981 17:15:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:10:56.981 17:15:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:10:56.981 { 00:10:56.981 "params": { 00:10:56.981 "name": "Nvme$subsystem", 00:10:56.981 "trtype": "$TEST_TRANSPORT", 00:10:56.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:56.981 "adrfam": "ipv4", 00:10:56.981 "trsvcid": "$NVMF_PORT", 00:10:56.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:56.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:56.981 "hdgst": ${hdgst:-false}, 00:10:56.981 "ddgst": ${ddgst:-false} 00:10:56.981 }, 00:10:56.981 "method": "bdev_nvme_attach_controller" 00:10:56.981 } 00:10:56.981 EOF 00:10:56.981 )") 00:10:56.981 17:15:06 -- nvmf/common.sh@543 -- # cat 00:10:56.981 17:15:06 -- nvmf/common.sh@545 -- # jq . 00:10:56.981 17:15:06 -- nvmf/common.sh@546 -- # IFS=, 00:10:56.981 17:15:06 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:10:56.981 "params": { 00:10:56.981 "name": "Nvme0", 00:10:56.981 "trtype": "rdma", 00:10:56.981 "traddr": "192.168.100.8", 00:10:56.981 "adrfam": "ipv4", 00:10:56.981 "trsvcid": "4420", 00:10:56.981 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:56.981 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:56.981 "hdgst": false, 00:10:56.981 "ddgst": false 00:10:56.981 }, 00:10:56.981 "method": "bdev_nvme_attach_controller" 00:10:56.981 }' 00:10:56.981 [2024-04-24 17:15:06.159655] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
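The long run of "ABORTED - SQ DELETION" completions above is the intended effect of the host-management step: while the first bdevperf (pid 2992891) is mid-run, the test calls nvmf_subsystem_remove_host for its host NQN and then re-adds it; the initiator loses its RDMA qpairs ("CQ transport error -6"), the controller enters a failed state, and every outstanding WRITE/READ is completed as aborted. The test then kill -9s the stalled perf process and starts a second, 1-second bdevperf run to confirm the target still serves I/O. A rough standalone reproduction of that toggle, using the NQNs taken from this log, would be:

# Sketch: revoke and restore host access while I/O is in flight.
./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# existing connections from that host are dropped; outstanding I/O completes as ABORTED - SQ DELETION
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0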
00:10:56.981 [2024-04-24 17:15:06.159702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2992930 ] 00:10:56.981 EAL: No free 2048 kB hugepages reported on node 1 00:10:56.981 [2024-04-24 17:15:06.215375] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.240 [2024-04-24 17:15:06.289493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.240 Running I/O for 1 seconds... 00:10:58.617 00:10:58.617 Latency(us) 00:10:58.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:58.617 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:58.617 Verification LBA range: start 0x0 length 0x400 00:10:58.617 Nvme0n1 : 1.00 3040.43 190.03 0.00 0.00 20615.03 885.52 43191.34 00:10:58.617 =================================================================================================================== 00:10:58.617 Total : 3040.43 190.03 0.00 0.00 20615.03 885.52 43191.34 00:10:58.617 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 2992891 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:10:58.617 17:15:07 -- target/host_management.sh@102 -- # stoptarget 00:10:58.617 17:15:07 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:10:58.617 17:15:07 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:10:58.617 17:15:07 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:58.617 17:15:07 -- target/host_management.sh@40 -- # nvmftestfini 00:10:58.617 17:15:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:58.617 17:15:07 -- nvmf/common.sh@117 -- # sync 00:10:58.617 17:15:07 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:58.617 17:15:07 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:58.617 17:15:07 -- nvmf/common.sh@120 -- # set +e 00:10:58.617 17:15:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:58.617 17:15:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:58.617 rmmod nvme_rdma 00:10:58.617 rmmod nvme_fabrics 00:10:58.617 17:15:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:58.617 17:15:07 -- nvmf/common.sh@124 -- # set -e 00:10:58.617 17:15:07 -- nvmf/common.sh@125 -- # return 0 00:10:58.617 17:15:07 -- nvmf/common.sh@478 -- # '[' -n 2992832 ']' 00:10:58.617 17:15:07 -- nvmf/common.sh@479 -- # killprocess 2992832 00:10:58.617 17:15:07 -- common/autotest_common.sh@936 -- # '[' -z 2992832 ']' 00:10:58.617 17:15:07 -- common/autotest_common.sh@940 -- # kill -0 2992832 00:10:58.617 17:15:07 -- common/autotest_common.sh@941 -- # uname 00:10:58.617 17:15:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:58.617 17:15:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2992832 00:10:58.617 17:15:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:58.617 17:15:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:58.617 17:15:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2992832' 00:10:58.617 killing process with pid 2992832 00:10:58.617 17:15:07 -- common/autotest_common.sh@955 -- # kill 2992832 00:10:58.617 
17:15:07 -- common/autotest_common.sh@960 -- # wait 2992832 00:10:58.877 [2024-04-24 17:15:08.079906] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:58.877 17:15:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:58.877 17:15:08 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:10:58.877 00:10:58.877 real 0m5.083s 00:10:58.877 user 0m22.816s 00:10:58.877 sys 0m0.863s 00:10:58.877 17:15:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:58.877 17:15:08 -- common/autotest_common.sh@10 -- # set +x 00:10:58.877 ************************************ 00:10:58.877 END TEST nvmf_host_management 00:10:58.877 ************************************ 00:10:59.137 17:15:08 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:59.137 00:10:59.137 real 0m10.395s 00:10:59.137 user 0m24.362s 00:10:59.137 sys 0m4.782s 00:10:59.137 17:15:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:59.137 17:15:08 -- common/autotest_common.sh@10 -- # set +x 00:10:59.137 ************************************ 00:10:59.137 END TEST nvmf_host_management 00:10:59.137 ************************************ 00:10:59.137 17:15:08 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:10:59.137 17:15:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:59.137 17:15:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:59.137 17:15:08 -- common/autotest_common.sh@10 -- # set +x 00:10:59.137 ************************************ 00:10:59.137 START TEST nvmf_lvol 00:10:59.137 ************************************ 00:10:59.137 17:15:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:10:59.137 * Looking for test storage... 
00:10:59.137 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:59.137 17:15:08 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:59.137 17:15:08 -- nvmf/common.sh@7 -- # uname -s 00:10:59.137 17:15:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.137 17:15:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.137 17:15:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.137 17:15:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.137 17:15:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.137 17:15:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.137 17:15:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.137 17:15:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.137 17:15:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.137 17:15:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.137 17:15:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:10:59.137 17:15:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:10:59.137 17:15:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.137 17:15:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.137 17:15:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:59.137 17:15:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.137 17:15:08 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:59.137 17:15:08 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.137 17:15:08 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.137 17:15:08 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.137 17:15:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.137 17:15:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.137 17:15:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.137 17:15:08 -- paths/export.sh@5 -- # export PATH 00:10:59.137 17:15:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.137 17:15:08 -- nvmf/common.sh@47 -- # : 0 00:10:59.137 17:15:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:59.137 17:15:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:59.137 17:15:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.137 17:15:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.137 17:15:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.137 17:15:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:59.137 17:15:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:59.137 17:15:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:59.137 17:15:08 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:59.137 17:15:08 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:59.137 17:15:08 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:10:59.137 17:15:08 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:59.137 17:15:08 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:59.137 17:15:08 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:59.137 17:15:08 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:10:59.137 17:15:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:59.137 17:15:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:59.137 17:15:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:59.137 17:15:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:59.137 17:15:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.137 17:15:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:59.137 17:15:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.137 17:15:08 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:59.137 17:15:08 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:59.137 17:15:08 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:59.137 17:15:08 -- common/autotest_common.sh@10 -- # set +x 00:11:04.408 17:15:13 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:04.408 17:15:13 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:04.408 17:15:13 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:04.408 17:15:13 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:04.408 17:15:13 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:04.408 17:15:13 -- 
nvmf/common.sh@293 -- # pci_drivers=() 00:11:04.408 17:15:13 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:04.408 17:15:13 -- nvmf/common.sh@295 -- # net_devs=() 00:11:04.408 17:15:13 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:04.408 17:15:13 -- nvmf/common.sh@296 -- # e810=() 00:11:04.408 17:15:13 -- nvmf/common.sh@296 -- # local -ga e810 00:11:04.408 17:15:13 -- nvmf/common.sh@297 -- # x722=() 00:11:04.408 17:15:13 -- nvmf/common.sh@297 -- # local -ga x722 00:11:04.408 17:15:13 -- nvmf/common.sh@298 -- # mlx=() 00:11:04.408 17:15:13 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:04.408 17:15:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:04.408 17:15:13 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:04.408 17:15:13 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:04.408 17:15:13 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:04.408 17:15:13 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:04.408 17:15:13 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:04.408 17:15:13 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:04.408 17:15:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:04.408 17:15:13 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:04.408 17:15:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:04.408 17:15:13 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:04.408 17:15:13 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:04.408 17:15:13 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:04.408 17:15:13 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:04.408 17:15:13 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:04.408 17:15:13 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:04.408 17:15:13 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:04.408 17:15:13 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:04.408 17:15:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:04.408 17:15:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:11:04.408 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:11:04.408 17:15:13 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:04.408 17:15:13 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:04.408 17:15:13 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:04.408 17:15:13 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:04.408 17:15:13 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:04.408 17:15:13 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:04.408 17:15:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:04.408 17:15:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:11:04.408 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:11:04.408 17:15:13 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:04.408 17:15:13 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:04.408 17:15:13 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:04.408 17:15:13 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:04.408 17:15:13 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:04.408 17:15:13 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:04.408 17:15:13 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:04.408 17:15:13 -- 
nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:04.408 17:15:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:04.408 17:15:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.408 17:15:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:04.408 17:15:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.408 17:15:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:11:04.408 Found net devices under 0000:da:00.0: mlx_0_0 00:11:04.408 17:15:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.408 17:15:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:04.408 17:15:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.408 17:15:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:04.408 17:15:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.408 17:15:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:11:04.408 Found net devices under 0000:da:00.1: mlx_0_1 00:11:04.408 17:15:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.408 17:15:13 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:04.408 17:15:13 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:04.408 17:15:13 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:04.408 17:15:13 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:11:04.408 17:15:13 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:11:04.408 17:15:13 -- nvmf/common.sh@409 -- # rdma_device_init 00:11:04.408 17:15:13 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:11:04.408 17:15:13 -- nvmf/common.sh@58 -- # uname 00:11:04.408 17:15:13 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:04.408 17:15:13 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:04.408 17:15:13 -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:04.408 17:15:13 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:04.408 17:15:13 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:04.408 17:15:13 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:04.408 17:15:13 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:04.408 17:15:13 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:04.408 17:15:13 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:11:04.408 17:15:13 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:04.408 17:15:13 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:04.408 17:15:13 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:04.408 17:15:13 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:04.408 17:15:13 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:04.408 17:15:13 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:04.408 17:15:13 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:04.408 17:15:13 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:04.408 17:15:13 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:04.408 17:15:13 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:04.408 17:15:13 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:04.409 17:15:13 -- nvmf/common.sh@105 -- # continue 2 00:11:04.409 17:15:13 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:04.409 17:15:13 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:04.409 17:15:13 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:04.409 17:15:13 -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:11:04.409 17:15:13 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:04.409 17:15:13 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:04.409 17:15:13 -- nvmf/common.sh@105 -- # continue 2 00:11:04.409 17:15:13 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:04.409 17:15:13 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:04.409 17:15:13 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:04.409 17:15:13 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:04.409 17:15:13 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:04.409 17:15:13 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:04.409 17:15:13 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:04.409 17:15:13 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:04.409 17:15:13 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:04.409 430: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:04.409 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:11:04.409 altname enp218s0f0np0 00:11:04.409 altname ens818f0np0 00:11:04.409 inet 192.168.100.8/24 scope global mlx_0_0 00:11:04.409 valid_lft forever preferred_lft forever 00:11:04.409 17:15:13 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:04.409 17:15:13 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:04.409 17:15:13 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:04.409 17:15:13 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:04.409 17:15:13 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:04.409 17:15:13 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:04.409 17:15:13 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:04.409 17:15:13 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:04.409 17:15:13 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:04.409 431: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:04.409 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:11:04.409 altname enp218s0f1np1 00:11:04.409 altname ens818f1np1 00:11:04.409 inet 192.168.100.9/24 scope global mlx_0_1 00:11:04.409 valid_lft forever preferred_lft forever 00:11:04.409 17:15:13 -- nvmf/common.sh@411 -- # return 0 00:11:04.409 17:15:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:04.409 17:15:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:04.409 17:15:13 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:11:04.409 17:15:13 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:11:04.409 17:15:13 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:04.409 17:15:13 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:04.409 17:15:13 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:04.409 17:15:13 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:04.409 17:15:13 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:04.409 17:15:13 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:04.409 17:15:13 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:04.409 17:15:13 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:04.409 17:15:13 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:04.409 17:15:13 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:04.409 17:15:13 -- nvmf/common.sh@105 -- # continue 2 00:11:04.409 17:15:13 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:04.409 17:15:13 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:04.409 17:15:13 -- nvmf/common.sh@103 -- # [[ mlx_0_1 
== \m\l\x\_\0\_\0 ]] 00:11:04.409 17:15:13 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:04.409 17:15:13 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:04.409 17:15:13 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:04.409 17:15:13 -- nvmf/common.sh@105 -- # continue 2 00:11:04.409 17:15:13 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:04.409 17:15:13 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:04.409 17:15:13 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:04.409 17:15:13 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:04.409 17:15:13 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:04.409 17:15:13 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:04.409 17:15:13 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:04.409 17:15:13 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:04.409 17:15:13 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:04.409 17:15:13 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:04.409 17:15:13 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:04.409 17:15:13 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:04.409 17:15:13 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:11:04.409 192.168.100.9' 00:11:04.409 17:15:13 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:11:04.409 192.168.100.9' 00:11:04.409 17:15:13 -- nvmf/common.sh@446 -- # head -n 1 00:11:04.409 17:15:13 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:04.409 17:15:13 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:11:04.409 192.168.100.9' 00:11:04.409 17:15:13 -- nvmf/common.sh@447 -- # tail -n +2 00:11:04.409 17:15:13 -- nvmf/common.sh@447 -- # head -n 1 00:11:04.409 17:15:13 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:04.409 17:15:13 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:11:04.409 17:15:13 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:04.409 17:15:13 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:11:04.409 17:15:13 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:11:04.409 17:15:13 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:11:04.409 17:15:13 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:04.409 17:15:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:04.409 17:15:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:04.409 17:15:13 -- common/autotest_common.sh@10 -- # set +x 00:11:04.409 17:15:13 -- nvmf/common.sh@470 -- # nvmfpid=2995582 00:11:04.409 17:15:13 -- nvmf/common.sh@471 -- # waitforlisten 2995582 00:11:04.409 17:15:13 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:04.409 17:15:13 -- common/autotest_common.sh@817 -- # '[' -z 2995582 ']' 00:11:04.409 17:15:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.409 17:15:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:04.409 17:15:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.409 17:15:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:04.409 17:15:13 -- common/autotest_common.sh@10 -- # set +x 00:11:04.409 [2024-04-24 17:15:13.395000] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:11:04.409 [2024-04-24 17:15:13.395052] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.409 EAL: No free 2048 kB hugepages reported on node 1 00:11:04.409 [2024-04-24 17:15:13.452225] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:04.409 [2024-04-24 17:15:13.526207] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.409 [2024-04-24 17:15:13.526249] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.409 [2024-04-24 17:15:13.526256] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.409 [2024-04-24 17:15:13.526262] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.409 [2024-04-24 17:15:13.526267] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:04.409 [2024-04-24 17:15:13.526355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.409 [2024-04-24 17:15:13.526450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:04.409 [2024-04-24 17:15:13.526452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.977 17:15:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:04.977 17:15:14 -- common/autotest_common.sh@850 -- # return 0 00:11:04.977 17:15:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:04.977 17:15:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:04.977 17:15:14 -- common/autotest_common.sh@10 -- # set +x 00:11:05.235 17:15:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:05.235 17:15:14 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:05.235 [2024-04-24 17:15:14.396065] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x198c380/0x1990870) succeed. 00:11:05.235 [2024-04-24 17:15:14.406301] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x198d8d0/0x19d1f00) succeed. 
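The trace that follows drives the lvol target setup one RPC at a time. As a readability aid only, the same sequence condensed into a plain shell sketch (return values captured into variables instead of the literal UUIDs that appear later in the log; paths assume the same workspace layout as the run above) looks roughly like this:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
# the RDMA transport was created just above: nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
# two 64 MiB malloc bdevs (512-byte blocks) striped into a raid0 base bdev
$rpc bdev_malloc_create 64 512                                   # -> Malloc0
$rpc bdev_malloc_create 64 512                                   # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
# lvstore on the raid, then a 20 MiB lvol inside it
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                   # prints the lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # prints the lvol bdev UUID
# expose the lvol over NVMe-oF RDMA on 192.168.100.8:4420
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420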
00:11:05.494 17:15:14 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:05.494 17:15:14 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:05.494 17:15:14 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:05.753 17:15:14 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:05.753 17:15:14 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:06.012 17:15:15 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:06.012 17:15:15 -- target/nvmf_lvol.sh@29 -- # lvs=f7092a2c-cf62-42ee-9d52-31f6db229095 00:11:06.270 17:15:15 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f7092a2c-cf62-42ee-9d52-31f6db229095 lvol 20 00:11:06.270 17:15:15 -- target/nvmf_lvol.sh@32 -- # lvol=f21b2024-741e-46d6-b995-dcd472793649 00:11:06.270 17:15:15 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:06.529 17:15:15 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f21b2024-741e-46d6-b995-dcd472793649 00:11:06.787 17:15:15 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:11:06.787 [2024-04-24 17:15:15.930709] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:06.787 17:15:15 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:07.045 17:15:16 -- target/nvmf_lvol.sh@42 -- # perf_pid=2995649 00:11:07.045 17:15:16 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:07.045 17:15:16 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:07.045 EAL: No free 2048 kB hugepages reported on node 1 00:11:07.982 17:15:17 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f21b2024-741e-46d6-b995-dcd472793649 MY_SNAPSHOT 00:11:08.240 17:15:17 -- target/nvmf_lvol.sh@47 -- # snapshot=4c634cbb-4393-4cdb-bca9-337fa31fc78d 00:11:08.240 17:15:17 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f21b2024-741e-46d6-b995-dcd472793649 30 00:11:08.499 17:15:17 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 4c634cbb-4393-4cdb-bca9-337fa31fc78d MY_CLONE 00:11:08.499 17:15:17 -- target/nvmf_lvol.sh@49 -- # clone=ec5993a6-49dc-43f2-8bb9-70e3f61b6da5 00:11:08.499 17:15:17 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ec5993a6-49dc-43f2-8bb9-70e3f61b6da5 00:11:08.758 17:15:17 -- target/nvmf_lvol.sh@53 -- # wait 2995649 00:11:18.734 Initializing NVMe Controllers 00:11:18.734 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: 
nqn.2016-06.io.spdk:cnode0 00:11:18.734 Controller IO queue size 128, less than required. 00:11:18.734 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:18.734 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:18.734 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:18.734 Initialization complete. Launching workers. 00:11:18.734 ======================================================== 00:11:18.734 Latency(us) 00:11:18.734 Device Information : IOPS MiB/s Average min max 00:11:18.734 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16781.40 65.55 7628.98 2354.34 51089.63 00:11:18.734 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16726.80 65.34 7653.84 3566.16 47073.63 00:11:18.734 ======================================================== 00:11:18.734 Total : 33508.20 130.89 7641.39 2354.34 51089.63 00:11:18.734 00:11:18.734 17:15:27 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:18.734 17:15:27 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f21b2024-741e-46d6-b995-dcd472793649 00:11:18.734 17:15:27 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f7092a2c-cf62-42ee-9d52-31f6db229095 00:11:18.992 17:15:28 -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:18.992 17:15:28 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:18.992 17:15:28 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:18.992 17:15:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:18.992 17:15:28 -- nvmf/common.sh@117 -- # sync 00:11:18.992 17:15:28 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:18.992 17:15:28 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:18.992 17:15:28 -- nvmf/common.sh@120 -- # set +e 00:11:18.992 17:15:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:18.992 17:15:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:18.992 rmmod nvme_rdma 00:11:18.992 rmmod nvme_fabrics 00:11:18.992 17:15:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:18.992 17:15:28 -- nvmf/common.sh@124 -- # set -e 00:11:18.992 17:15:28 -- nvmf/common.sh@125 -- # return 0 00:11:18.992 17:15:28 -- nvmf/common.sh@478 -- # '[' -n 2995582 ']' 00:11:18.992 17:15:28 -- nvmf/common.sh@479 -- # killprocess 2995582 00:11:18.992 17:15:28 -- common/autotest_common.sh@936 -- # '[' -z 2995582 ']' 00:11:18.992 17:15:28 -- common/autotest_common.sh@940 -- # kill -0 2995582 00:11:18.992 17:15:28 -- common/autotest_common.sh@941 -- # uname 00:11:18.992 17:15:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:18.992 17:15:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2995582 00:11:18.992 17:15:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:18.992 17:15:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:18.992 17:15:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2995582' 00:11:18.992 killing process with pid 2995582 00:11:18.992 17:15:28 -- common/autotest_common.sh@955 -- # kill 2995582 00:11:18.992 17:15:28 -- common/autotest_common.sh@960 -- # wait 2995582 00:11:19.252 17:15:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:19.252 17:15:28 -- 
nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:11:19.252 00:11:19.252 real 0m20.197s 00:11:19.252 user 1m10.544s 00:11:19.252 sys 0m4.812s 00:11:19.252 17:15:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:19.252 17:15:28 -- common/autotest_common.sh@10 -- # set +x 00:11:19.252 ************************************ 00:11:19.252 END TEST nvmf_lvol 00:11:19.252 ************************************ 00:11:19.252 17:15:28 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:11:19.252 17:15:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:19.252 17:15:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:19.252 17:15:28 -- common/autotest_common.sh@10 -- # set +x 00:11:19.511 ************************************ 00:11:19.511 START TEST nvmf_lvs_grow 00:11:19.511 ************************************ 00:11:19.511 17:15:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:11:19.511 * Looking for test storage... 00:11:19.511 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:19.511 17:15:28 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.511 17:15:28 -- nvmf/common.sh@7 -- # uname -s 00:11:19.511 17:15:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.511 17:15:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.511 17:15:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.511 17:15:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.511 17:15:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.511 17:15:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.511 17:15:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.511 17:15:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.511 17:15:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.511 17:15:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.511 17:15:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:11:19.511 17:15:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:11:19.511 17:15:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.511 17:15:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.511 17:15:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:19.511 17:15:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.511 17:15:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:19.511 17:15:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.511 17:15:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.511 17:15:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.511 17:15:28 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.511 17:15:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.511 17:15:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.511 17:15:28 -- paths/export.sh@5 -- # export PATH 00:11:19.511 17:15:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.511 17:15:28 -- nvmf/common.sh@47 -- # : 0 00:11:19.511 17:15:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:19.511 17:15:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:19.511 17:15:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.511 17:15:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.511 17:15:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.511 17:15:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:19.511 17:15:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:19.511 17:15:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:19.511 17:15:28 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:19.511 17:15:28 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:19.511 17:15:28 -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:19.511 17:15:28 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:11:19.511 17:15:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.511 17:15:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:19.511 17:15:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:19.511 17:15:28 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:11:19.511 17:15:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.511 17:15:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:19.511 17:15:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.511 17:15:28 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:19.511 17:15:28 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:19.511 17:15:28 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:19.511 17:15:28 -- common/autotest_common.sh@10 -- # set +x 00:11:24.851 17:15:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:24.851 17:15:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:24.851 17:15:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:24.851 17:15:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:24.851 17:15:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:24.851 17:15:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:24.851 17:15:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:24.851 17:15:34 -- nvmf/common.sh@295 -- # net_devs=() 00:11:24.851 17:15:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:24.851 17:15:34 -- nvmf/common.sh@296 -- # e810=() 00:11:24.851 17:15:34 -- nvmf/common.sh@296 -- # local -ga e810 00:11:24.851 17:15:34 -- nvmf/common.sh@297 -- # x722=() 00:11:24.851 17:15:34 -- nvmf/common.sh@297 -- # local -ga x722 00:11:24.851 17:15:34 -- nvmf/common.sh@298 -- # mlx=() 00:11:24.851 17:15:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:24.851 17:15:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:24.851 17:15:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:24.851 17:15:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:24.851 17:15:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:24.851 17:15:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:24.851 17:15:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:24.851 17:15:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:24.851 17:15:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:24.851 17:15:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:24.851 17:15:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:24.851 17:15:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:24.851 17:15:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:24.851 17:15:34 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:24.851 17:15:34 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:24.851 17:15:34 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:24.851 17:15:34 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:24.851 17:15:34 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:24.851 17:15:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:24.851 17:15:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:24.851 17:15:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:11:24.851 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:11:24.851 17:15:34 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:24.851 17:15:34 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:24.851 17:15:34 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:24.851 17:15:34 -- nvmf/common.sh@351 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:11:24.851 17:15:34 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:24.851 17:15:34 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:24.851 17:15:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:24.851 17:15:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:11:24.851 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:11:24.851 17:15:34 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:24.851 17:15:34 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:24.851 17:15:34 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:24.851 17:15:34 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:24.851 17:15:34 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:24.851 17:15:34 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:24.851 17:15:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:24.851 17:15:34 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:24.851 17:15:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:24.851 17:15:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.111 17:15:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:25.111 17:15:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.111 17:15:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:11:25.111 Found net devices under 0000:da:00.0: mlx_0_0 00:11:25.111 17:15:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.111 17:15:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:25.111 17:15:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.111 17:15:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:25.111 17:15:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.111 17:15:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:11:25.111 Found net devices under 0000:da:00.1: mlx_0_1 00:11:25.111 17:15:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.111 17:15:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:25.111 17:15:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:25.111 17:15:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:25.111 17:15:34 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:11:25.111 17:15:34 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:11:25.111 17:15:34 -- nvmf/common.sh@409 -- # rdma_device_init 00:11:25.111 17:15:34 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:11:25.111 17:15:34 -- nvmf/common.sh@58 -- # uname 00:11:25.111 17:15:34 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:25.111 17:15:34 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:25.111 17:15:34 -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:25.111 17:15:34 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:25.111 17:15:34 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:25.111 17:15:34 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:25.111 17:15:34 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:25.111 17:15:34 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:25.111 17:15:34 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:11:25.111 17:15:34 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:25.111 17:15:34 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:25.111 17:15:34 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:25.111 17:15:34 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:25.111 17:15:34 -- 
nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:25.111 17:15:34 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:25.111 17:15:34 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:25.111 17:15:34 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:25.111 17:15:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:25.111 17:15:34 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:25.111 17:15:34 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:25.111 17:15:34 -- nvmf/common.sh@105 -- # continue 2 00:11:25.111 17:15:34 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:25.111 17:15:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:25.111 17:15:34 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:25.111 17:15:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:25.111 17:15:34 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:25.111 17:15:34 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:25.111 17:15:34 -- nvmf/common.sh@105 -- # continue 2 00:11:25.111 17:15:34 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:25.111 17:15:34 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:25.111 17:15:34 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:25.111 17:15:34 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:25.111 17:15:34 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:25.111 17:15:34 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:25.111 17:15:34 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:25.111 17:15:34 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:25.111 17:15:34 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:25.111 430: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:25.111 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:11:25.111 altname enp218s0f0np0 00:11:25.111 altname ens818f0np0 00:11:25.111 inet 192.168.100.8/24 scope global mlx_0_0 00:11:25.111 valid_lft forever preferred_lft forever 00:11:25.111 17:15:34 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:25.111 17:15:34 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:25.111 17:15:34 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:25.111 17:15:34 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:25.111 17:15:34 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:25.111 17:15:34 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:25.111 17:15:34 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:25.111 17:15:34 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:25.111 17:15:34 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:25.111 431: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:25.111 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:11:25.111 altname enp218s0f1np1 00:11:25.111 altname ens818f1np1 00:11:25.111 inet 192.168.100.9/24 scope global mlx_0_1 00:11:25.111 valid_lft forever preferred_lft forever 00:11:25.111 17:15:34 -- nvmf/common.sh@411 -- # return 0 00:11:25.111 17:15:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:25.111 17:15:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:25.111 17:15:34 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:11:25.111 17:15:34 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:11:25.112 17:15:34 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:25.112 17:15:34 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 
00:11:25.112 17:15:34 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:25.112 17:15:34 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:25.112 17:15:34 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:25.112 17:15:34 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:25.112 17:15:34 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:25.112 17:15:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:25.112 17:15:34 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:25.112 17:15:34 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:25.112 17:15:34 -- nvmf/common.sh@105 -- # continue 2 00:11:25.112 17:15:34 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:25.112 17:15:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:25.112 17:15:34 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:25.112 17:15:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:25.112 17:15:34 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:25.112 17:15:34 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:25.112 17:15:34 -- nvmf/common.sh@105 -- # continue 2 00:11:25.112 17:15:34 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:25.112 17:15:34 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:25.112 17:15:34 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:25.112 17:15:34 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:25.112 17:15:34 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:25.112 17:15:34 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:25.112 17:15:34 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:25.112 17:15:34 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:25.112 17:15:34 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:25.112 17:15:34 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:25.112 17:15:34 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:25.112 17:15:34 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:25.112 17:15:34 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:11:25.112 192.168.100.9' 00:11:25.112 17:15:34 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:11:25.112 192.168.100.9' 00:11:25.112 17:15:34 -- nvmf/common.sh@446 -- # head -n 1 00:11:25.112 17:15:34 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:25.112 17:15:34 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:11:25.112 192.168.100.9' 00:11:25.112 17:15:34 -- nvmf/common.sh@447 -- # tail -n +2 00:11:25.112 17:15:34 -- nvmf/common.sh@447 -- # head -n 1 00:11:25.112 17:15:34 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:25.112 17:15:34 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:11:25.112 17:15:34 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:25.112 17:15:34 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:11:25.112 17:15:34 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:11:25.112 17:15:34 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:11:25.112 17:15:34 -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:25.112 17:15:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:25.112 17:15:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:25.112 17:15:34 -- common/autotest_common.sh@10 -- # set +x 00:11:25.112 17:15:34 -- nvmf/common.sh@470 -- # nvmfpid=2998047 00:11:25.112 17:15:34 -- nvmf/common.sh@471 -- # waitforlisten 2998047 
00:11:25.112 17:15:34 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:25.112 17:15:34 -- common/autotest_common.sh@817 -- # '[' -z 2998047 ']' 00:11:25.112 17:15:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.112 17:15:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:25.112 17:15:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.112 17:15:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:25.112 17:15:34 -- common/autotest_common.sh@10 -- # set +x 00:11:25.112 [2024-04-24 17:15:34.355302] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:11:25.112 [2024-04-24 17:15:34.355348] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.372 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.372 [2024-04-24 17:15:34.411995] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.372 [2024-04-24 17:15:34.484883] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:25.372 [2024-04-24 17:15:34.484924] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:25.372 [2024-04-24 17:15:34.484930] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:25.372 [2024-04-24 17:15:34.484936] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:25.372 [2024-04-24 17:15:34.484940] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:25.372 [2024-04-24 17:15:34.484956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.939 17:15:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:25.939 17:15:35 -- common/autotest_common.sh@850 -- # return 0 00:11:25.939 17:15:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:25.939 17:15:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:25.939 17:15:35 -- common/autotest_common.sh@10 -- # set +x 00:11:25.939 17:15:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:25.939 17:15:35 -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:26.198 [2024-04-24 17:15:35.343667] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ea1d70/0x1ea6260) succeed. 00:11:26.198 [2024-04-24 17:15:35.352433] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ea3270/0x1ee78f0) succeed. 
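The lvs_grow_clean run that follows exercises growing a logical volume store on top of a resizable AIO bdev. Condensed from the trace below into a shell sketch (the lvstore UUID is captured into a variable rather than hard-coded, and the aio_bdev path is the one the test itself uses), the preparation steps are roughly:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
aio_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
# 200 MiB backing file exposed as an AIO bdev with a 4 KiB block size
rm -f "$aio_file"
truncate -s 200M "$aio_file"
$rpc bdev_aio_create "$aio_file" aio_bdev 4096
# lvstore with 4 MiB clusters and extra metadata headroom, then a 150 MiB lvol inside it
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
$rpc bdev_lvol_create -u "$lvs" lvol 150
# grow the backing file to 400 MiB and rescan so the larger device becomes visible to the lvstore
truncate -s 400M "$aio_file"
$rpc bdev_aio_rescan aio_bdev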
00:11:26.198 17:15:35 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:26.198 17:15:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:26.198 17:15:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:26.198 17:15:35 -- common/autotest_common.sh@10 -- # set +x 00:11:26.456 ************************************ 00:11:26.456 START TEST lvs_grow_clean 00:11:26.456 ************************************ 00:11:26.456 17:15:35 -- common/autotest_common.sh@1111 -- # lvs_grow 00:11:26.456 17:15:35 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:26.456 17:15:35 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:26.456 17:15:35 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:26.456 17:15:35 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:26.456 17:15:35 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:26.456 17:15:35 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:26.456 17:15:35 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:26.456 17:15:35 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:26.456 17:15:35 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:26.456 17:15:35 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:26.456 17:15:35 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:26.714 17:15:35 -- target/nvmf_lvs_grow.sh@28 -- # lvs=a26b5e80-99cc-4ce1-9974-204758aef996 00:11:26.714 17:15:35 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a26b5e80-99cc-4ce1-9974-204758aef996 00:11:26.714 17:15:35 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:26.972 17:15:36 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:26.972 17:15:36 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:26.972 17:15:36 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a26b5e80-99cc-4ce1-9974-204758aef996 lvol 150 00:11:26.972 17:15:36 -- target/nvmf_lvs_grow.sh@33 -- # lvol=2e3eacd7-5878-4ff5-a683-aed1a00102cb 00:11:26.972 17:15:36 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:26.972 17:15:36 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:27.231 [2024-04-24 17:15:36.362570] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:27.231 [2024-04-24 17:15:36.362622] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:27.231 true 00:11:27.231 17:15:36 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a26b5e80-99cc-4ce1-9974-204758aef996 00:11:27.231 17:15:36 -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:11:27.489 17:15:36 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:27.489 17:15:36 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:27.490 17:15:36 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2e3eacd7-5878-4ff5-a683-aed1a00102cb 00:11:27.748 17:15:36 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:11:28.006 [2024-04-24 17:15:37.036744] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:28.006 17:15:37 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:28.006 17:15:37 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2998138 00:11:28.006 17:15:37 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:28.006 17:15:37 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2998138 /var/tmp/bdevperf.sock 00:11:28.006 17:15:37 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:28.006 17:15:37 -- common/autotest_common.sh@817 -- # '[' -z 2998138 ']' 00:11:28.006 17:15:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:28.006 17:15:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:28.006 17:15:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:28.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:28.007 17:15:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:28.007 17:15:37 -- common/autotest_common.sh@10 -- # set +x 00:11:28.007 [2024-04-24 17:15:37.232701] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:11:28.007 [2024-04-24 17:15:37.232748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2998138 ] 00:11:28.265 EAL: No free 2048 kB hugepages reported on node 1 00:11:28.265 [2024-04-24 17:15:37.285303] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.265 [2024-04-24 17:15:37.353837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.832 17:15:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:28.832 17:15:38 -- common/autotest_common.sh@850 -- # return 0 00:11:28.832 17:15:38 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:29.091 Nvme0n1 00:11:29.091 17:15:38 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:29.349 [ 00:11:29.349 { 00:11:29.350 "name": "Nvme0n1", 00:11:29.350 "aliases": [ 00:11:29.350 "2e3eacd7-5878-4ff5-a683-aed1a00102cb" 00:11:29.350 ], 00:11:29.350 "product_name": "NVMe disk", 00:11:29.350 "block_size": 4096, 00:11:29.350 "num_blocks": 38912, 00:11:29.350 "uuid": "2e3eacd7-5878-4ff5-a683-aed1a00102cb", 00:11:29.350 "assigned_rate_limits": { 00:11:29.350 "rw_ios_per_sec": 0, 00:11:29.350 "rw_mbytes_per_sec": 0, 00:11:29.350 "r_mbytes_per_sec": 0, 00:11:29.350 "w_mbytes_per_sec": 0 00:11:29.350 }, 00:11:29.350 "claimed": false, 00:11:29.350 "zoned": false, 00:11:29.350 "supported_io_types": { 00:11:29.350 "read": true, 00:11:29.350 "write": true, 00:11:29.350 "unmap": true, 00:11:29.350 "write_zeroes": true, 00:11:29.350 "flush": true, 00:11:29.350 "reset": true, 00:11:29.350 "compare": true, 00:11:29.350 "compare_and_write": true, 00:11:29.350 "abort": true, 00:11:29.350 "nvme_admin": true, 00:11:29.350 "nvme_io": true 00:11:29.350 }, 00:11:29.350 "memory_domains": [ 00:11:29.350 { 00:11:29.350 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:11:29.350 "dma_device_type": 0 00:11:29.350 } 00:11:29.350 ], 00:11:29.350 "driver_specific": { 00:11:29.350 "nvme": [ 00:11:29.350 { 00:11:29.350 "trid": { 00:11:29.350 "trtype": "RDMA", 00:11:29.350 "adrfam": "IPv4", 00:11:29.350 "traddr": "192.168.100.8", 00:11:29.350 "trsvcid": "4420", 00:11:29.350 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:29.350 }, 00:11:29.350 "ctrlr_data": { 00:11:29.350 "cntlid": 1, 00:11:29.350 "vendor_id": "0x8086", 00:11:29.350 "model_number": "SPDK bdev Controller", 00:11:29.350 "serial_number": "SPDK0", 00:11:29.350 "firmware_revision": "24.05", 00:11:29.350 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:29.350 "oacs": { 00:11:29.350 "security": 0, 00:11:29.350 "format": 0, 00:11:29.350 "firmware": 0, 00:11:29.350 "ns_manage": 0 00:11:29.350 }, 00:11:29.350 "multi_ctrlr": true, 00:11:29.350 "ana_reporting": false 00:11:29.350 }, 00:11:29.350 "vs": { 00:11:29.350 "nvme_version": "1.3" 00:11:29.350 }, 00:11:29.350 "ns_data": { 00:11:29.350 "id": 1, 00:11:29.350 "can_share": true 00:11:29.350 } 00:11:29.350 } 00:11:29.350 ], 00:11:29.350 "mp_policy": "active_passive" 00:11:29.350 } 00:11:29.350 } 00:11:29.350 ] 00:11:29.350 17:15:38 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2998158 00:11:29.350 17:15:38 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:29.350 17:15:38 -- 
target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:29.350 Running I/O for 10 seconds... 00:11:30.286 Latency(us) 00:11:30.286 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:30.286 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:30.286 Nvme0n1 : 1.00 35870.00 140.12 0.00 0.00 0.00 0.00 0.00 00:11:30.286 =================================================================================================================== 00:11:30.286 Total : 35870.00 140.12 0.00 0.00 0.00 0.00 0.00 00:11:30.286 00:11:31.223 17:15:40 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a26b5e80-99cc-4ce1-9974-204758aef996 00:11:31.481 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:31.481 Nvme0n1 : 2.00 35953.00 140.44 0.00 0.00 0.00 0.00 0.00 00:11:31.481 =================================================================================================================== 00:11:31.481 Total : 35953.00 140.44 0.00 0.00 0.00 0.00 0.00 00:11:31.481 00:11:31.481 true 00:11:31.481 17:15:40 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a26b5e80-99cc-4ce1-9974-204758aef996 00:11:31.481 17:15:40 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:31.743 17:15:40 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:31.743 17:15:40 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:31.743 17:15:40 -- target/nvmf_lvs_grow.sh@65 -- # wait 2998158 00:11:32.311 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:32.311 Nvme0n1 : 3.00 36128.00 141.12 0.00 0.00 0.00 0.00 0.00 00:11:32.311 =================================================================================================================== 00:11:32.311 Total : 36128.00 141.12 0.00 0.00 0.00 0.00 0.00 00:11:32.311 00:11:33.697 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:33.697 Nvme0n1 : 4.00 36287.25 141.75 0.00 0.00 0.00 0.00 0.00 00:11:33.697 =================================================================================================================== 00:11:33.697 Total : 36287.25 141.75 0.00 0.00 0.00 0.00 0.00 00:11:33.697 00:11:34.632 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:34.632 Nvme0n1 : 5.00 36383.00 142.12 0.00 0.00 0.00 0.00 0.00 00:11:34.632 =================================================================================================================== 00:11:34.632 Total : 36383.00 142.12 0.00 0.00 0.00 0.00 0.00 00:11:34.632 00:11:35.567 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:35.567 Nvme0n1 : 6.00 36454.33 142.40 0.00 0.00 0.00 0.00 0.00 00:11:35.567 =================================================================================================================== 00:11:35.567 Total : 36454.33 142.40 0.00 0.00 0.00 0.00 0.00 00:11:35.567 00:11:36.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:36.503 Nvme0n1 : 7.00 36511.29 142.62 0.00 0.00 0.00 0.00 0.00 00:11:36.503 =================================================================================================================== 00:11:36.503 Total : 36511.29 142.62 0.00 0.00 0.00 0.00 0.00 00:11:36.503 
00:11:37.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:37.439 Nvme0n1 : 8.00 36548.75 142.77 0.00 0.00 0.00 0.00 0.00 00:11:37.439 =================================================================================================================== 00:11:37.439 Total : 36548.75 142.77 0.00 0.00 0.00 0.00 0.00 00:11:37.439 00:11:38.375 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:38.375 Nvme0n1 : 9.00 36580.11 142.89 0.00 0.00 0.00 0.00 0.00 00:11:38.375 =================================================================================================================== 00:11:38.375 Total : 36580.11 142.89 0.00 0.00 0.00 0.00 0.00 00:11:38.375 00:11:39.312 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:39.312 Nvme0n1 : 10.00 36605.10 142.99 0.00 0.00 0.00 0.00 0.00 00:11:39.312 =================================================================================================================== 00:11:39.312 Total : 36605.10 142.99 0.00 0.00 0.00 0.00 0.00 00:11:39.312 00:11:39.312 00:11:39.312 Latency(us) 00:11:39.312 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:39.312 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:39.312 Nvme0n1 : 10.00 36605.76 142.99 0.00 0.00 3493.98 2637.04 14105.84 00:11:39.312 =================================================================================================================== 00:11:39.312 Total : 36605.76 142.99 0.00 0.00 3493.98 2637.04 14105.84 00:11:39.312 0 00:11:39.312 17:15:48 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2998138 00:11:39.312 17:15:48 -- common/autotest_common.sh@936 -- # '[' -z 2998138 ']' 00:11:39.312 17:15:48 -- common/autotest_common.sh@940 -- # kill -0 2998138 00:11:39.312 17:15:48 -- common/autotest_common.sh@941 -- # uname 00:11:39.312 17:15:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:39.571 17:15:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2998138 00:11:39.571 17:15:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:39.571 17:15:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:39.571 17:15:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2998138' 00:11:39.571 killing process with pid 2998138 00:11:39.571 17:15:48 -- common/autotest_common.sh@955 -- # kill 2998138 00:11:39.571 Received shutdown signal, test time was about 10.000000 seconds 00:11:39.571 00:11:39.571 Latency(us) 00:11:39.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:39.571 =================================================================================================================== 00:11:39.571 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:39.571 17:15:48 -- common/autotest_common.sh@960 -- # wait 2998138 00:11:39.571 17:15:48 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:39.829 17:15:48 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:40.088 17:15:49 -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a26b5e80-99cc-4ce1-9974-204758aef996 00:11:40.088 17:15:49 -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:40.088 
17:15:49 -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:40.088 17:15:49 -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:40.088 17:15:49 -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:40.346 [2024-04-24 17:15:49.471281] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:40.346 17:15:49 -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a26b5e80-99cc-4ce1-9974-204758aef996 00:11:40.346 17:15:49 -- common/autotest_common.sh@638 -- # local es=0 00:11:40.346 17:15:49 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a26b5e80-99cc-4ce1-9974-204758aef996 00:11:40.346 17:15:49 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:40.346 17:15:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:40.346 17:15:49 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:40.346 17:15:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:40.346 17:15:49 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:40.346 17:15:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:40.346 17:15:49 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:40.346 17:15:49 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:11:40.346 17:15:49 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a26b5e80-99cc-4ce1-9974-204758aef996 00:11:40.604 request: 00:11:40.604 { 00:11:40.604 "uuid": "a26b5e80-99cc-4ce1-9974-204758aef996", 00:11:40.604 "method": "bdev_lvol_get_lvstores", 00:11:40.604 "req_id": 1 00:11:40.604 } 00:11:40.604 Got JSON-RPC error response 00:11:40.604 response: 00:11:40.604 { 00:11:40.604 "code": -19, 00:11:40.604 "message": "No such device" 00:11:40.604 } 00:11:40.604 17:15:49 -- common/autotest_common.sh@641 -- # es=1 00:11:40.604 17:15:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:40.604 17:15:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:40.604 17:15:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:40.604 17:15:49 -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:40.604 aio_bdev 00:11:40.862 17:15:49 -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2e3eacd7-5878-4ff5-a683-aed1a00102cb 00:11:40.862 17:15:49 -- common/autotest_common.sh@885 -- # local bdev_name=2e3eacd7-5878-4ff5-a683-aed1a00102cb 00:11:40.862 17:15:49 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:40.862 17:15:49 -- common/autotest_common.sh@887 -- # local i 00:11:40.862 17:15:49 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:40.862 17:15:49 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:40.862 17:15:49 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:40.862 17:15:50 -- 
common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2e3eacd7-5878-4ff5-a683-aed1a00102cb -t 2000 00:11:41.120 [ 00:11:41.120 { 00:11:41.120 "name": "2e3eacd7-5878-4ff5-a683-aed1a00102cb", 00:11:41.120 "aliases": [ 00:11:41.120 "lvs/lvol" 00:11:41.120 ], 00:11:41.120 "product_name": "Logical Volume", 00:11:41.120 "block_size": 4096, 00:11:41.120 "num_blocks": 38912, 00:11:41.120 "uuid": "2e3eacd7-5878-4ff5-a683-aed1a00102cb", 00:11:41.120 "assigned_rate_limits": { 00:11:41.120 "rw_ios_per_sec": 0, 00:11:41.120 "rw_mbytes_per_sec": 0, 00:11:41.120 "r_mbytes_per_sec": 0, 00:11:41.120 "w_mbytes_per_sec": 0 00:11:41.120 }, 00:11:41.120 "claimed": false, 00:11:41.120 "zoned": false, 00:11:41.120 "supported_io_types": { 00:11:41.120 "read": true, 00:11:41.120 "write": true, 00:11:41.120 "unmap": true, 00:11:41.120 "write_zeroes": true, 00:11:41.120 "flush": false, 00:11:41.120 "reset": true, 00:11:41.120 "compare": false, 00:11:41.120 "compare_and_write": false, 00:11:41.120 "abort": false, 00:11:41.120 "nvme_admin": false, 00:11:41.120 "nvme_io": false 00:11:41.120 }, 00:11:41.120 "driver_specific": { 00:11:41.120 "lvol": { 00:11:41.120 "lvol_store_uuid": "a26b5e80-99cc-4ce1-9974-204758aef996", 00:11:41.120 "base_bdev": "aio_bdev", 00:11:41.120 "thin_provision": false, 00:11:41.120 "snapshot": false, 00:11:41.120 "clone": false, 00:11:41.120 "esnap_clone": false 00:11:41.120 } 00:11:41.120 } 00:11:41.120 } 00:11:41.120 ] 00:11:41.120 17:15:50 -- common/autotest_common.sh@893 -- # return 0 00:11:41.121 17:15:50 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a26b5e80-99cc-4ce1-9974-204758aef996 00:11:41.121 17:15:50 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:41.121 17:15:50 -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:41.121 17:15:50 -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a26b5e80-99cc-4ce1-9974-204758aef996 00:11:41.121 17:15:50 -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:41.379 17:15:50 -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:41.379 17:15:50 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2e3eacd7-5878-4ff5-a683-aed1a00102cb 00:11:41.636 17:15:50 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a26b5e80-99cc-4ce1-9974-204758aef996 00:11:41.636 17:15:50 -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:41.894 17:15:51 -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:41.894 00:11:41.894 real 0m15.531s 00:11:41.894 user 0m15.587s 00:11:41.894 sys 0m0.974s 00:11:41.895 17:15:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:41.895 17:15:51 -- common/autotest_common.sh@10 -- # set +x 00:11:41.895 ************************************ 00:11:41.895 END TEST lvs_grow_clean 00:11:41.895 ************************************ 00:11:41.895 17:15:51 -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:41.895 17:15:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:41.895 17:15:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:41.895 17:15:51 -- 
common/autotest_common.sh@10 -- # set +x 00:11:42.154 ************************************ 00:11:42.154 START TEST lvs_grow_dirty 00:11:42.154 ************************************ 00:11:42.154 17:15:51 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:11:42.154 17:15:51 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:42.154 17:15:51 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:42.154 17:15:51 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:42.154 17:15:51 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:42.154 17:15:51 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:42.154 17:15:51 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:42.154 17:15:51 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:42.154 17:15:51 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:42.154 17:15:51 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:42.154 17:15:51 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:42.154 17:15:51 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:42.412 17:15:51 -- target/nvmf_lvs_grow.sh@28 -- # lvs=2ea4c693-5db4-4c91-b8dc-73660cd60195 00:11:42.412 17:15:51 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ea4c693-5db4-4c91-b8dc-73660cd60195 00:11:42.412 17:15:51 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:42.671 17:15:51 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:42.671 17:15:51 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:42.671 17:15:51 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2ea4c693-5db4-4c91-b8dc-73660cd60195 lvol 150 00:11:42.671 17:15:51 -- target/nvmf_lvs_grow.sh@33 -- # lvol=b3048f37-02e3-4c14-81a8-3004ee293ee3 00:11:42.671 17:15:51 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:42.671 17:15:51 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:42.929 [2024-04-24 17:15:52.033485] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:42.929 [2024-04-24 17:15:52.033539] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:42.929 true 00:11:42.929 17:15:52 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ea4c693-5db4-4c91-b8dc-73660cd60195 00:11:42.929 17:15:52 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:43.186 17:15:52 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:43.186 17:15:52 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:43.186 17:15:52 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b3048f37-02e3-4c14-81a8-3004ee293ee3 00:11:43.444 17:15:52 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:11:43.444 [2024-04-24 17:15:52.687611] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:43.703 17:15:52 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:43.703 17:15:52 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2998422 00:11:43.703 17:15:52 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:43.703 17:15:52 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:43.703 17:15:52 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2998422 /var/tmp/bdevperf.sock 00:11:43.703 17:15:52 -- common/autotest_common.sh@817 -- # '[' -z 2998422 ']' 00:11:43.703 17:15:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:43.703 17:15:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:43.703 17:15:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:43.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:43.703 17:15:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:43.703 17:15:52 -- common/autotest_common.sh@10 -- # set +x 00:11:43.703 [2024-04-24 17:15:52.907245] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:11:43.703 [2024-04-24 17:15:52.907296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2998422 ] 00:11:43.703 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.962 [2024-04-24 17:15:52.961155] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.962 [2024-04-24 17:15:53.038060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.530 17:15:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:44.530 17:15:53 -- common/autotest_common.sh@850 -- # return 0 00:11:44.530 17:15:53 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:44.789 Nvme0n1 00:11:44.789 17:15:53 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:45.048 [ 00:11:45.048 { 00:11:45.048 "name": "Nvme0n1", 00:11:45.048 "aliases": [ 00:11:45.048 "b3048f37-02e3-4c14-81a8-3004ee293ee3" 00:11:45.048 ], 00:11:45.048 "product_name": "NVMe disk", 00:11:45.048 "block_size": 4096, 00:11:45.048 "num_blocks": 38912, 00:11:45.048 "uuid": "b3048f37-02e3-4c14-81a8-3004ee293ee3", 00:11:45.048 "assigned_rate_limits": { 00:11:45.048 "rw_ios_per_sec": 0, 00:11:45.048 "rw_mbytes_per_sec": 0, 00:11:45.048 "r_mbytes_per_sec": 0, 00:11:45.048 "w_mbytes_per_sec": 0 00:11:45.048 }, 00:11:45.048 "claimed": false, 00:11:45.048 "zoned": false, 00:11:45.048 "supported_io_types": { 00:11:45.048 "read": true, 00:11:45.048 "write": true, 00:11:45.048 "unmap": true, 00:11:45.048 "write_zeroes": true, 00:11:45.048 "flush": true, 00:11:45.048 "reset": true, 00:11:45.048 "compare": true, 00:11:45.048 "compare_and_write": true, 00:11:45.048 "abort": true, 00:11:45.048 "nvme_admin": true, 00:11:45.048 "nvme_io": true 00:11:45.048 }, 00:11:45.048 "memory_domains": [ 00:11:45.048 { 00:11:45.048 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:11:45.048 "dma_device_type": 0 00:11:45.048 } 00:11:45.048 ], 00:11:45.048 "driver_specific": { 00:11:45.048 "nvme": [ 00:11:45.048 { 00:11:45.048 "trid": { 00:11:45.048 "trtype": "RDMA", 00:11:45.048 "adrfam": "IPv4", 00:11:45.048 "traddr": "192.168.100.8", 00:11:45.048 "trsvcid": "4420", 00:11:45.048 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:45.048 }, 00:11:45.048 "ctrlr_data": { 00:11:45.048 "cntlid": 1, 00:11:45.048 "vendor_id": "0x8086", 00:11:45.048 "model_number": "SPDK bdev Controller", 00:11:45.048 "serial_number": "SPDK0", 00:11:45.048 "firmware_revision": "24.05", 00:11:45.048 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:45.048 "oacs": { 00:11:45.048 "security": 0, 00:11:45.048 "format": 0, 00:11:45.048 "firmware": 0, 00:11:45.048 "ns_manage": 0 00:11:45.048 }, 00:11:45.048 "multi_ctrlr": true, 00:11:45.048 "ana_reporting": false 00:11:45.048 }, 00:11:45.048 "vs": { 00:11:45.048 "nvme_version": "1.3" 00:11:45.048 }, 00:11:45.048 "ns_data": { 00:11:45.048 "id": 1, 00:11:45.048 "can_share": true 00:11:45.048 } 00:11:45.048 } 00:11:45.048 ], 00:11:45.048 "mp_policy": "active_passive" 00:11:45.048 } 00:11:45.048 } 00:11:45.048 ] 00:11:45.048 17:15:54 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2998444 00:11:45.048 17:15:54 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:45.048 17:15:54 -- 
target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:45.048 Running I/O for 10 seconds... 00:11:46.000 Latency(us) 00:11:46.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:46.000 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:46.000 Nvme0n1 : 1.00 35973.00 140.52 0.00 0.00 0.00 0.00 0.00 00:11:46.000 =================================================================================================================== 00:11:46.000 Total : 35973.00 140.52 0.00 0.00 0.00 0.00 0.00 00:11:46.000 00:11:46.934 17:15:56 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2ea4c693-5db4-4c91-b8dc-73660cd60195 00:11:47.193 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:47.193 Nvme0n1 : 2.00 36211.00 141.45 0.00 0.00 0.00 0.00 0.00 00:11:47.193 =================================================================================================================== 00:11:47.193 Total : 36211.00 141.45 0.00 0.00 0.00 0.00 0.00 00:11:47.193 00:11:47.193 true 00:11:47.193 17:15:56 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ea4c693-5db4-4c91-b8dc-73660cd60195 00:11:47.193 17:15:56 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:47.452 17:15:56 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:47.452 17:15:56 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:47.452 17:15:56 -- target/nvmf_lvs_grow.sh@65 -- # wait 2998444 00:11:48.020 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:48.020 Nvme0n1 : 3.00 36265.67 141.66 0.00 0.00 0.00 0.00 0.00 00:11:48.020 =================================================================================================================== 00:11:48.020 Total : 36265.67 141.66 0.00 0.00 0.00 0.00 0.00 00:11:48.020 00:11:49.400 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:49.400 Nvme0n1 : 4.00 36376.75 142.10 0.00 0.00 0.00 0.00 0.00 00:11:49.400 =================================================================================================================== 00:11:49.400 Total : 36376.75 142.10 0.00 0.00 0.00 0.00 0.00 00:11:49.400 00:11:49.967 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:49.967 Nvme0n1 : 5.00 36453.80 142.40 0.00 0.00 0.00 0.00 0.00 00:11:49.967 =================================================================================================================== 00:11:49.967 Total : 36453.80 142.40 0.00 0.00 0.00 0.00 0.00 00:11:49.967 00:11:51.373 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:51.373 Nvme0n1 : 6.00 36463.67 142.44 0.00 0.00 0.00 0.00 0.00 00:11:51.373 =================================================================================================================== 00:11:51.373 Total : 36463.67 142.44 0.00 0.00 0.00 0.00 0.00 00:11:51.373 00:11:52.382 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:52.382 Nvme0n1 : 7.00 36430.29 142.31 0.00 0.00 0.00 0.00 0.00 00:11:52.382 =================================================================================================================== 00:11:52.382 Total : 36430.29 142.31 0.00 0.00 0.00 0.00 0.00 00:11:52.382 
00:11:52.978 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:52.978 Nvme0n1 : 8.00 36471.50 142.47 0.00 0.00 0.00 0.00 0.00 00:11:52.978 =================================================================================================================== 00:11:52.978 Total : 36471.50 142.47 0.00 0.00 0.00 0.00 0.00 00:11:52.978 00:11:54.352 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:54.352 Nvme0n1 : 9.00 36508.78 142.61 0.00 0.00 0.00 0.00 0.00 00:11:54.352 =================================================================================================================== 00:11:54.352 Total : 36508.78 142.61 0.00 0.00 0.00 0.00 0.00 00:11:54.352 00:11:55.287 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:55.287 Nvme0n1 : 10.00 36541.00 142.74 0.00 0.00 0.00 0.00 0.00 00:11:55.287 =================================================================================================================== 00:11:55.287 Total : 36541.00 142.74 0.00 0.00 0.00 0.00 0.00 00:11:55.287 00:11:55.287 00:11:55.287 Latency(us) 00:11:55.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:55.287 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:55.287 Nvme0n1 : 10.00 36543.21 142.75 0.00 0.00 3499.96 2465.40 10048.85 00:11:55.287 =================================================================================================================== 00:11:55.287 Total : 36543.21 142.75 0.00 0.00 3499.96 2465.40 10048.85 00:11:55.287 0 00:11:55.287 17:16:04 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2998422 00:11:55.287 17:16:04 -- common/autotest_common.sh@936 -- # '[' -z 2998422 ']' 00:11:55.287 17:16:04 -- common/autotest_common.sh@940 -- # kill -0 2998422 00:11:55.287 17:16:04 -- common/autotest_common.sh@941 -- # uname 00:11:55.287 17:16:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:55.287 17:16:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2998422 00:11:55.287 17:16:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:55.287 17:16:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:55.287 17:16:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2998422' 00:11:55.287 killing process with pid 2998422 00:11:55.287 17:16:04 -- common/autotest_common.sh@955 -- # kill 2998422 00:11:55.287 Received shutdown signal, test time was about 10.000000 seconds 00:11:55.287 00:11:55.287 Latency(us) 00:11:55.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:55.287 =================================================================================================================== 00:11:55.287 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:55.287 17:16:04 -- common/autotest_common.sh@960 -- # wait 2998422 00:11:55.287 17:16:04 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:55.546 17:16:04 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:55.805 17:16:04 -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ea4c693-5db4-4c91-b8dc-73660cd60195 00:11:55.805 17:16:04 -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:55.805 
17:16:05 -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:55.805 17:16:05 -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:55.805 17:16:05 -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2998047 00:11:55.805 17:16:05 -- target/nvmf_lvs_grow.sh@75 -- # wait 2998047 00:11:56.064 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2998047 Killed "${NVMF_APP[@]}" "$@" 00:11:56.064 17:16:05 -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:56.064 17:16:05 -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:56.064 17:16:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:56.064 17:16:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:56.064 17:16:05 -- common/autotest_common.sh@10 -- # set +x 00:11:56.064 17:16:05 -- nvmf/common.sh@470 -- # nvmfpid=2998605 00:11:56.064 17:16:05 -- nvmf/common.sh@471 -- # waitforlisten 2998605 00:11:56.064 17:16:05 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:56.064 17:16:05 -- common/autotest_common.sh@817 -- # '[' -z 2998605 ']' 00:11:56.064 17:16:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.064 17:16:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:56.064 17:16:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.064 17:16:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:56.064 17:16:05 -- common/autotest_common.sh@10 -- # set +x 00:11:56.064 [2024-04-24 17:16:05.119178] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:11:56.064 [2024-04-24 17:16:05.119227] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:56.064 EAL: No free 2048 kB hugepages reported on node 1 00:11:56.064 [2024-04-24 17:16:05.176584] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.064 [2024-04-24 17:16:05.253056] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:56.064 [2024-04-24 17:16:05.253090] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:56.064 [2024-04-24 17:16:05.253097] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:56.064 [2024-04-24 17:16:05.253103] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:56.064 [2024-04-24 17:16:05.253109] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:56.064 [2024-04-24 17:16:05.253123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.998 17:16:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:56.998 17:16:05 -- common/autotest_common.sh@850 -- # return 0 00:11:56.998 17:16:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:56.998 17:16:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:56.998 17:16:05 -- common/autotest_common.sh@10 -- # set +x 00:11:56.998 17:16:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.998 17:16:05 -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:56.998 [2024-04-24 17:16:06.101531] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:56.998 [2024-04-24 17:16:06.101628] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:56.998 [2024-04-24 17:16:06.101653] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:56.998 17:16:06 -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:56.998 17:16:06 -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b3048f37-02e3-4c14-81a8-3004ee293ee3 00:11:56.998 17:16:06 -- common/autotest_common.sh@885 -- # local bdev_name=b3048f37-02e3-4c14-81a8-3004ee293ee3 00:11:56.998 17:16:06 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:56.999 17:16:06 -- common/autotest_common.sh@887 -- # local i 00:11:56.999 17:16:06 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:56.999 17:16:06 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:56.999 17:16:06 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:57.256 17:16:06 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b3048f37-02e3-4c14-81a8-3004ee293ee3 -t 2000 00:11:57.256 [ 00:11:57.256 { 00:11:57.256 "name": "b3048f37-02e3-4c14-81a8-3004ee293ee3", 00:11:57.256 "aliases": [ 00:11:57.256 "lvs/lvol" 00:11:57.256 ], 00:11:57.256 "product_name": "Logical Volume", 00:11:57.256 "block_size": 4096, 00:11:57.256 "num_blocks": 38912, 00:11:57.256 "uuid": "b3048f37-02e3-4c14-81a8-3004ee293ee3", 00:11:57.256 "assigned_rate_limits": { 00:11:57.256 "rw_ios_per_sec": 0, 00:11:57.256 "rw_mbytes_per_sec": 0, 00:11:57.256 "r_mbytes_per_sec": 0, 00:11:57.256 "w_mbytes_per_sec": 0 00:11:57.256 }, 00:11:57.256 "claimed": false, 00:11:57.256 "zoned": false, 00:11:57.256 "supported_io_types": { 00:11:57.256 "read": true, 00:11:57.256 "write": true, 00:11:57.256 "unmap": true, 00:11:57.256 "write_zeroes": true, 00:11:57.256 "flush": false, 00:11:57.256 "reset": true, 00:11:57.256 "compare": false, 00:11:57.256 "compare_and_write": false, 00:11:57.256 "abort": false, 00:11:57.256 "nvme_admin": false, 00:11:57.256 "nvme_io": false 00:11:57.256 }, 00:11:57.256 "driver_specific": { 00:11:57.256 "lvol": { 00:11:57.256 "lvol_store_uuid": "2ea4c693-5db4-4c91-b8dc-73660cd60195", 00:11:57.256 "base_bdev": "aio_bdev", 00:11:57.256 "thin_provision": false, 00:11:57.256 "snapshot": false, 00:11:57.256 "clone": false, 00:11:57.256 "esnap_clone": false 00:11:57.256 } 00:11:57.256 } 00:11:57.256 } 00:11:57.256 ] 00:11:57.256 17:16:06 -- common/autotest_common.sh@893 -- # return 0 00:11:57.256 17:16:06 -- target/nvmf_lvs_grow.sh@79 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ea4c693-5db4-4c91-b8dc-73660cd60195 00:11:57.256 17:16:06 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:57.515 17:16:06 -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:57.515 17:16:06 -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ea4c693-5db4-4c91-b8dc-73660cd60195 00:11:57.515 17:16:06 -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:57.774 17:16:06 -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:57.774 17:16:06 -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:57.774 [2024-04-24 17:16:06.937929] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:57.774 17:16:06 -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ea4c693-5db4-4c91-b8dc-73660cd60195 00:11:57.774 17:16:06 -- common/autotest_common.sh@638 -- # local es=0 00:11:57.774 17:16:06 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ea4c693-5db4-4c91-b8dc-73660cd60195 00:11:57.774 17:16:06 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:57.774 17:16:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:57.774 17:16:06 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:57.774 17:16:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:57.774 17:16:06 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:57.774 17:16:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:57.774 17:16:06 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:57.774 17:16:06 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:11:57.774 17:16:06 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ea4c693-5db4-4c91-b8dc-73660cd60195 00:11:58.032 request: 00:11:58.032 { 00:11:58.032 "uuid": "2ea4c693-5db4-4c91-b8dc-73660cd60195", 00:11:58.032 "method": "bdev_lvol_get_lvstores", 00:11:58.032 "req_id": 1 00:11:58.032 } 00:11:58.032 Got JSON-RPC error response 00:11:58.032 response: 00:11:58.032 { 00:11:58.032 "code": -19, 00:11:58.032 "message": "No such device" 00:11:58.032 } 00:11:58.032 17:16:07 -- common/autotest_common.sh@641 -- # es=1 00:11:58.032 17:16:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:58.032 17:16:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:58.032 17:16:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:58.032 17:16:07 -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:58.291 aio_bdev 00:11:58.291 17:16:07 -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b3048f37-02e3-4c14-81a8-3004ee293ee3 00:11:58.291 17:16:07 -- common/autotest_common.sh@885 -- # local 
bdev_name=b3048f37-02e3-4c14-81a8-3004ee293ee3 00:11:58.291 17:16:07 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:58.291 17:16:07 -- common/autotest_common.sh@887 -- # local i 00:11:58.291 17:16:07 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:58.291 17:16:07 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:58.291 17:16:07 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:58.291 17:16:07 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b3048f37-02e3-4c14-81a8-3004ee293ee3 -t 2000 00:11:58.548 [ 00:11:58.548 { 00:11:58.548 "name": "b3048f37-02e3-4c14-81a8-3004ee293ee3", 00:11:58.548 "aliases": [ 00:11:58.548 "lvs/lvol" 00:11:58.548 ], 00:11:58.548 "product_name": "Logical Volume", 00:11:58.548 "block_size": 4096, 00:11:58.548 "num_blocks": 38912, 00:11:58.548 "uuid": "b3048f37-02e3-4c14-81a8-3004ee293ee3", 00:11:58.548 "assigned_rate_limits": { 00:11:58.548 "rw_ios_per_sec": 0, 00:11:58.548 "rw_mbytes_per_sec": 0, 00:11:58.548 "r_mbytes_per_sec": 0, 00:11:58.548 "w_mbytes_per_sec": 0 00:11:58.548 }, 00:11:58.548 "claimed": false, 00:11:58.548 "zoned": false, 00:11:58.548 "supported_io_types": { 00:11:58.548 "read": true, 00:11:58.548 "write": true, 00:11:58.548 "unmap": true, 00:11:58.548 "write_zeroes": true, 00:11:58.548 "flush": false, 00:11:58.548 "reset": true, 00:11:58.548 "compare": false, 00:11:58.548 "compare_and_write": false, 00:11:58.548 "abort": false, 00:11:58.548 "nvme_admin": false, 00:11:58.548 "nvme_io": false 00:11:58.548 }, 00:11:58.548 "driver_specific": { 00:11:58.548 "lvol": { 00:11:58.548 "lvol_store_uuid": "2ea4c693-5db4-4c91-b8dc-73660cd60195", 00:11:58.548 "base_bdev": "aio_bdev", 00:11:58.548 "thin_provision": false, 00:11:58.548 "snapshot": false, 00:11:58.548 "clone": false, 00:11:58.548 "esnap_clone": false 00:11:58.548 } 00:11:58.548 } 00:11:58.548 } 00:11:58.548 ] 00:11:58.548 17:16:07 -- common/autotest_common.sh@893 -- # return 0 00:11:58.548 17:16:07 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ea4c693-5db4-4c91-b8dc-73660cd60195 00:11:58.548 17:16:07 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:58.806 17:16:07 -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:58.806 17:16:07 -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ea4c693-5db4-4c91-b8dc-73660cd60195 00:11:58.806 17:16:07 -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:58.806 17:16:07 -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:58.806 17:16:07 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b3048f37-02e3-4c14-81a8-3004ee293ee3 00:11:59.066 17:16:08 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2ea4c693-5db4-4c91-b8dc-73660cd60195 00:11:59.325 17:16:08 -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:59.325 17:16:08 -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:59.325 00:11:59.325 real 0m17.318s 00:11:59.325 user 0m45.516s 00:11:59.325 sys 0m2.873s 00:11:59.325 17:16:08 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:11:59.325 17:16:08 -- common/autotest_common.sh@10 -- # set +x 00:11:59.325 ************************************ 00:11:59.325 END TEST lvs_grow_dirty 00:11:59.325 ************************************ 00:11:59.325 17:16:08 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:59.325 17:16:08 -- common/autotest_common.sh@794 -- # type=--id 00:11:59.325 17:16:08 -- common/autotest_common.sh@795 -- # id=0 00:11:59.325 17:16:08 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:11:59.325 17:16:08 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:59.325 17:16:08 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:11:59.325 17:16:08 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:11:59.325 17:16:08 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:11:59.325 17:16:08 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:59.325 nvmf_trace.0 00:11:59.584 17:16:08 -- common/autotest_common.sh@809 -- # return 0 00:11:59.584 17:16:08 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:59.584 17:16:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:59.584 17:16:08 -- nvmf/common.sh@117 -- # sync 00:11:59.584 17:16:08 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:59.584 17:16:08 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:59.584 17:16:08 -- nvmf/common.sh@120 -- # set +e 00:11:59.584 17:16:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:59.584 17:16:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:59.584 rmmod nvme_rdma 00:11:59.584 rmmod nvme_fabrics 00:11:59.584 17:16:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:59.584 17:16:08 -- nvmf/common.sh@124 -- # set -e 00:11:59.584 17:16:08 -- nvmf/common.sh@125 -- # return 0 00:11:59.584 17:16:08 -- nvmf/common.sh@478 -- # '[' -n 2998605 ']' 00:11:59.584 17:16:08 -- nvmf/common.sh@479 -- # killprocess 2998605 00:11:59.584 17:16:08 -- common/autotest_common.sh@936 -- # '[' -z 2998605 ']' 00:11:59.584 17:16:08 -- common/autotest_common.sh@940 -- # kill -0 2998605 00:11:59.584 17:16:08 -- common/autotest_common.sh@941 -- # uname 00:11:59.584 17:16:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:59.584 17:16:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2998605 00:11:59.584 17:16:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:59.584 17:16:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:59.584 17:16:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2998605' 00:11:59.584 killing process with pid 2998605 00:11:59.584 17:16:08 -- common/autotest_common.sh@955 -- # kill 2998605 00:11:59.584 17:16:08 -- common/autotest_common.sh@960 -- # wait 2998605 00:11:59.844 17:16:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:59.844 17:16:08 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:11:59.844 00:11:59.844 real 0m40.261s 00:11:59.844 user 1m6.902s 00:11:59.844 sys 0m8.535s 00:11:59.844 17:16:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:59.844 17:16:08 -- common/autotest_common.sh@10 -- # set +x 00:11:59.844 ************************************ 00:11:59.844 END TEST nvmf_lvs_grow 00:11:59.844 ************************************ 00:11:59.844 17:16:08 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:11:59.844 17:16:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:59.844 17:16:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:59.844 17:16:08 -- common/autotest_common.sh@10 -- # set +x 00:11:59.844 ************************************ 00:11:59.844 START TEST nvmf_bdev_io_wait 00:11:59.844 ************************************ 00:11:59.844 17:16:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:12:00.104 * Looking for test storage... 00:12:00.104 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:00.104 17:16:09 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:00.104 17:16:09 -- nvmf/common.sh@7 -- # uname -s 00:12:00.104 17:16:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:00.104 17:16:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:00.104 17:16:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:00.104 17:16:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:00.104 17:16:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:00.104 17:16:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:00.104 17:16:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:00.104 17:16:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:00.104 17:16:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:00.104 17:16:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:00.104 17:16:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:12:00.104 17:16:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:12:00.104 17:16:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:00.104 17:16:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:00.104 17:16:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:00.104 17:16:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:00.104 17:16:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:00.104 17:16:09 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:00.104 17:16:09 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:00.104 17:16:09 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:00.104 17:16:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.104 17:16:09 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.104 17:16:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.104 17:16:09 -- paths/export.sh@5 -- # export PATH 00:12:00.104 17:16:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.104 17:16:09 -- nvmf/common.sh@47 -- # : 0 00:12:00.104 17:16:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:00.104 17:16:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:00.104 17:16:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:00.104 17:16:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:00.104 17:16:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:00.104 17:16:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:00.104 17:16:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:00.104 17:16:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:00.104 17:16:09 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:00.104 17:16:09 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:00.104 17:16:09 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:00.104 17:16:09 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:12:00.104 17:16:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:00.104 17:16:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:00.104 17:16:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:00.104 17:16:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:00.104 17:16:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.104 17:16:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:00.104 17:16:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.104 17:16:09 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:00.104 17:16:09 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:00.104 17:16:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:00.104 17:16:09 -- common/autotest_common.sh@10 -- # set +x 00:12:05.380 17:16:14 -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci 00:12:05.380 17:16:14 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:05.380 17:16:14 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:05.380 17:16:14 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:05.380 17:16:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:05.380 17:16:14 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:05.380 17:16:14 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:05.380 17:16:14 -- nvmf/common.sh@295 -- # net_devs=() 00:12:05.380 17:16:14 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:05.380 17:16:14 -- nvmf/common.sh@296 -- # e810=() 00:12:05.380 17:16:14 -- nvmf/common.sh@296 -- # local -ga e810 00:12:05.380 17:16:14 -- nvmf/common.sh@297 -- # x722=() 00:12:05.380 17:16:14 -- nvmf/common.sh@297 -- # local -ga x722 00:12:05.380 17:16:14 -- nvmf/common.sh@298 -- # mlx=() 00:12:05.380 17:16:14 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:05.380 17:16:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:05.380 17:16:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:05.380 17:16:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:05.380 17:16:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:05.380 17:16:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:05.380 17:16:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:05.380 17:16:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:05.380 17:16:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:05.380 17:16:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:05.380 17:16:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:05.380 17:16:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:05.380 17:16:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:05.380 17:16:14 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:05.380 17:16:14 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:05.380 17:16:14 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:05.380 17:16:14 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:05.380 17:16:14 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:05.380 17:16:14 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:05.380 17:16:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:05.380 17:16:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:12:05.380 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:12:05.380 17:16:14 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:05.380 17:16:14 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:05.380 17:16:14 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:05.380 17:16:14 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:05.380 17:16:14 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:05.380 17:16:14 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:05.380 17:16:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:05.380 17:16:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:12:05.380 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:12:05.380 17:16:14 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:05.380 17:16:14 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:05.380 17:16:14 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
00:12:05.380 17:16:14 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:05.380 17:16:14 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:05.380 17:16:14 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:05.380 17:16:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:05.380 17:16:14 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:05.380 17:16:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:05.380 17:16:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.380 17:16:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:05.380 17:16:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.380 17:16:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:12:05.380 Found net devices under 0000:da:00.0: mlx_0_0 00:12:05.380 17:16:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.380 17:16:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:05.380 17:16:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.380 17:16:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:05.380 17:16:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.380 17:16:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:12:05.380 Found net devices under 0000:da:00.1: mlx_0_1 00:12:05.380 17:16:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.380 17:16:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:05.380 17:16:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:05.380 17:16:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:05.380 17:16:14 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:12:05.380 17:16:14 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:12:05.380 17:16:14 -- nvmf/common.sh@409 -- # rdma_device_init 00:12:05.380 17:16:14 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:12:05.380 17:16:14 -- nvmf/common.sh@58 -- # uname 00:12:05.380 17:16:14 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:05.380 17:16:14 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:05.380 17:16:14 -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:05.380 17:16:14 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:05.380 17:16:14 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:05.380 17:16:14 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:05.380 17:16:14 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:05.380 17:16:14 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:05.380 17:16:14 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:12:05.380 17:16:14 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:05.380 17:16:14 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:05.380 17:16:14 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:05.380 17:16:14 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:05.380 17:16:14 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:05.380 17:16:14 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:05.380 17:16:14 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:05.380 17:16:14 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:05.380 17:16:14 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:05.381 17:16:14 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:05.381 17:16:14 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:05.381 17:16:14 -- nvmf/common.sh@105 -- # continue 2 00:12:05.381 17:16:14 
-- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:05.381 17:16:14 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:05.381 17:16:14 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:05.381 17:16:14 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:05.381 17:16:14 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:05.381 17:16:14 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:05.381 17:16:14 -- nvmf/common.sh@105 -- # continue 2 00:12:05.381 17:16:14 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:05.381 17:16:14 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:05.381 17:16:14 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:05.381 17:16:14 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:05.381 17:16:14 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:05.381 17:16:14 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:05.381 17:16:14 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:05.381 17:16:14 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:05.381 17:16:14 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:05.381 430: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:05.381 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:12:05.381 altname enp218s0f0np0 00:12:05.381 altname ens818f0np0 00:12:05.381 inet 192.168.100.8/24 scope global mlx_0_0 00:12:05.381 valid_lft forever preferred_lft forever 00:12:05.381 17:16:14 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:05.381 17:16:14 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:05.381 17:16:14 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:05.381 17:16:14 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:05.381 17:16:14 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:05.381 17:16:14 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:05.381 17:16:14 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:05.381 17:16:14 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:05.381 17:16:14 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:05.381 431: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:05.381 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:12:05.381 altname enp218s0f1np1 00:12:05.381 altname ens818f1np1 00:12:05.381 inet 192.168.100.9/24 scope global mlx_0_1 00:12:05.381 valid_lft forever preferred_lft forever 00:12:05.381 17:16:14 -- nvmf/common.sh@411 -- # return 0 00:12:05.381 17:16:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:05.381 17:16:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:05.381 17:16:14 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:12:05.381 17:16:14 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:12:05.381 17:16:14 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:05.381 17:16:14 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:05.381 17:16:14 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:05.381 17:16:14 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:05.381 17:16:14 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:05.381 17:16:14 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:05.381 17:16:14 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:05.381 17:16:14 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:05.381 17:16:14 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:05.381 17:16:14 -- nvmf/common.sh@104 -- # echo 
mlx_0_0 00:12:05.381 17:16:14 -- nvmf/common.sh@105 -- # continue 2 00:12:05.381 17:16:14 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:05.381 17:16:14 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:05.381 17:16:14 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:05.381 17:16:14 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:05.381 17:16:14 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:05.381 17:16:14 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:05.381 17:16:14 -- nvmf/common.sh@105 -- # continue 2 00:12:05.381 17:16:14 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:05.381 17:16:14 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:05.381 17:16:14 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:05.381 17:16:14 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:05.381 17:16:14 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:05.381 17:16:14 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:05.381 17:16:14 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:05.381 17:16:14 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:05.381 17:16:14 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:05.381 17:16:14 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:05.381 17:16:14 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:05.381 17:16:14 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:05.381 17:16:14 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:12:05.381 192.168.100.9' 00:12:05.381 17:16:14 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:12:05.381 192.168.100.9' 00:12:05.381 17:16:14 -- nvmf/common.sh@446 -- # head -n 1 00:12:05.381 17:16:14 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:05.381 17:16:14 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:12:05.381 192.168.100.9' 00:12:05.381 17:16:14 -- nvmf/common.sh@447 -- # tail -n +2 00:12:05.381 17:16:14 -- nvmf/common.sh@447 -- # head -n 1 00:12:05.381 17:16:14 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:05.381 17:16:14 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:12:05.381 17:16:14 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:05.381 17:16:14 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:12:05.381 17:16:14 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:12:05.381 17:16:14 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:12:05.381 17:16:14 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:05.381 17:16:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:05.381 17:16:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:05.381 17:16:14 -- common/autotest_common.sh@10 -- # set +x 00:12:05.381 17:16:14 -- nvmf/common.sh@470 -- # nvmfpid=3000940 00:12:05.381 17:16:14 -- nvmf/common.sh@471 -- # waitforlisten 3000940 00:12:05.381 17:16:14 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:05.381 17:16:14 -- common/autotest_common.sh@817 -- # '[' -z 3000940 ']' 00:12:05.381 17:16:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.381 17:16:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:05.381 17:16:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:05.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.381 17:16:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:05.381 17:16:14 -- common/autotest_common.sh@10 -- # set +x 00:12:05.381 [2024-04-24 17:16:14.509030] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:12:05.381 [2024-04-24 17:16:14.509076] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.381 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.381 [2024-04-24 17:16:14.564696] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:05.640 [2024-04-24 17:16:14.646178] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.640 [2024-04-24 17:16:14.646212] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.640 [2024-04-24 17:16:14.646219] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.640 [2024-04-24 17:16:14.646224] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.640 [2024-04-24 17:16:14.646229] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:05.640 [2024-04-24 17:16:14.646269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.640 [2024-04-24 17:16:14.646368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.640 [2024-04-24 17:16:14.646462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:05.640 [2024-04-24 17:16:14.646463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.207 17:16:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:06.207 17:16:15 -- common/autotest_common.sh@850 -- # return 0 00:12:06.207 17:16:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:06.207 17:16:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:06.207 17:16:15 -- common/autotest_common.sh@10 -- # set +x 00:12:06.207 17:16:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.207 17:16:15 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:06.207 17:16:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:06.207 17:16:15 -- common/autotest_common.sh@10 -- # set +x 00:12:06.207 17:16:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:06.207 17:16:15 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:06.207 17:16:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:06.207 17:16:15 -- common/autotest_common.sh@10 -- # set +x 00:12:06.207 17:16:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:06.207 17:16:15 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:06.207 17:16:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:06.207 17:16:15 -- common/autotest_common.sh@10 -- # set +x 00:12:06.207 [2024-04-24 17:16:15.436907] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdcdfb0/0xdd24a0) succeed. 00:12:06.207 [2024-04-24 17:16:15.446727] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xdcf5a0/0xe13b30) succeed. 
00:12:06.467 17:16:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:06.467 17:16:15 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:06.467 17:16:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:06.467 17:16:15 -- common/autotest_common.sh@10 -- # set +x 00:12:06.467 Malloc0 00:12:06.467 17:16:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:06.467 17:16:15 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:06.467 17:16:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:06.467 17:16:15 -- common/autotest_common.sh@10 -- # set +x 00:12:06.467 17:16:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:06.467 17:16:15 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:06.467 17:16:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:06.467 17:16:15 -- common/autotest_common.sh@10 -- # set +x 00:12:06.467 17:16:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:06.467 17:16:15 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:06.467 17:16:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:06.467 17:16:15 -- common/autotest_common.sh@10 -- # set +x 00:12:06.467 [2024-04-24 17:16:15.616575] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:06.467 17:16:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:06.467 17:16:15 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3000976 00:12:06.467 17:16:15 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:06.467 17:16:15 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:06.467 17:16:15 -- target/bdev_io_wait.sh@30 -- # READ_PID=3000978 00:12:06.467 17:16:15 -- nvmf/common.sh@521 -- # config=() 00:12:06.467 17:16:15 -- nvmf/common.sh@521 -- # local subsystem config 00:12:06.467 17:16:15 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:06.467 17:16:15 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:06.467 { 00:12:06.467 "params": { 00:12:06.467 "name": "Nvme$subsystem", 00:12:06.467 "trtype": "$TEST_TRANSPORT", 00:12:06.467 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:06.467 "adrfam": "ipv4", 00:12:06.467 "trsvcid": "$NVMF_PORT", 00:12:06.467 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:06.467 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:06.467 "hdgst": ${hdgst:-false}, 00:12:06.467 "ddgst": ${ddgst:-false} 00:12:06.467 }, 00:12:06.467 "method": "bdev_nvme_attach_controller" 00:12:06.467 } 00:12:06.467 EOF 00:12:06.467 )") 00:12:06.467 17:16:15 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3000980 00:12:06.467 17:16:15 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:06.467 17:16:15 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:06.467 17:16:15 -- nvmf/common.sh@521 -- # config=() 00:12:06.467 17:16:15 -- nvmf/common.sh@521 -- # local subsystem config 00:12:06.467 17:16:15 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:06.467 17:16:15 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:06.467 { 00:12:06.467 "params": { 00:12:06.467 "name": 
"Nvme$subsystem", 00:12:06.467 "trtype": "$TEST_TRANSPORT", 00:12:06.467 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:06.467 "adrfam": "ipv4", 00:12:06.468 "trsvcid": "$NVMF_PORT", 00:12:06.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:06.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:06.468 "hdgst": ${hdgst:-false}, 00:12:06.468 "ddgst": ${ddgst:-false} 00:12:06.468 }, 00:12:06.468 "method": "bdev_nvme_attach_controller" 00:12:06.468 } 00:12:06.468 EOF 00:12:06.468 )") 00:12:06.468 17:16:15 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3000983 00:12:06.468 17:16:15 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:06.468 17:16:15 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:06.468 17:16:15 -- target/bdev_io_wait.sh@35 -- # sync 00:12:06.468 17:16:15 -- nvmf/common.sh@543 -- # cat 00:12:06.468 17:16:15 -- nvmf/common.sh@521 -- # config=() 00:12:06.468 17:16:15 -- nvmf/common.sh@521 -- # local subsystem config 00:12:06.468 17:16:15 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:06.468 17:16:15 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:06.468 17:16:15 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:06.468 17:16:15 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:06.468 { 00:12:06.468 "params": { 00:12:06.468 "name": "Nvme$subsystem", 00:12:06.468 "trtype": "$TEST_TRANSPORT", 00:12:06.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:06.468 "adrfam": "ipv4", 00:12:06.468 "trsvcid": "$NVMF_PORT", 00:12:06.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:06.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:06.468 "hdgst": ${hdgst:-false}, 00:12:06.468 "ddgst": ${ddgst:-false} 00:12:06.468 }, 00:12:06.468 "method": "bdev_nvme_attach_controller" 00:12:06.468 } 00:12:06.468 EOF 00:12:06.468 )") 00:12:06.468 17:16:15 -- nvmf/common.sh@521 -- # config=() 00:12:06.468 17:16:15 -- nvmf/common.sh@521 -- # local subsystem config 00:12:06.468 17:16:15 -- nvmf/common.sh@543 -- # cat 00:12:06.468 17:16:15 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:06.468 17:16:15 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:06.468 { 00:12:06.468 "params": { 00:12:06.468 "name": "Nvme$subsystem", 00:12:06.468 "trtype": "$TEST_TRANSPORT", 00:12:06.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:06.468 "adrfam": "ipv4", 00:12:06.468 "trsvcid": "$NVMF_PORT", 00:12:06.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:06.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:06.468 "hdgst": ${hdgst:-false}, 00:12:06.468 "ddgst": ${ddgst:-false} 00:12:06.468 }, 00:12:06.468 "method": "bdev_nvme_attach_controller" 00:12:06.468 } 00:12:06.468 EOF 00:12:06.468 )") 00:12:06.468 17:16:15 -- nvmf/common.sh@543 -- # cat 00:12:06.468 17:16:15 -- target/bdev_io_wait.sh@37 -- # wait 3000976 00:12:06.468 17:16:15 -- nvmf/common.sh@543 -- # cat 00:12:06.468 17:16:15 -- nvmf/common.sh@545 -- # jq . 00:12:06.468 17:16:15 -- nvmf/common.sh@545 -- # jq . 00:12:06.468 17:16:15 -- nvmf/common.sh@545 -- # jq . 
00:12:06.468 17:16:15 -- nvmf/common.sh@546 -- # IFS=, 00:12:06.468 17:16:15 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:06.468 "params": { 00:12:06.468 "name": "Nvme1", 00:12:06.468 "trtype": "rdma", 00:12:06.468 "traddr": "192.168.100.8", 00:12:06.468 "adrfam": "ipv4", 00:12:06.468 "trsvcid": "4420", 00:12:06.468 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:06.468 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:06.468 "hdgst": false, 00:12:06.468 "ddgst": false 00:12:06.468 }, 00:12:06.468 "method": "bdev_nvme_attach_controller" 00:12:06.468 }' 00:12:06.468 17:16:15 -- nvmf/common.sh@545 -- # jq . 00:12:06.468 17:16:15 -- nvmf/common.sh@546 -- # IFS=, 00:12:06.468 17:16:15 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:06.468 "params": { 00:12:06.468 "name": "Nvme1", 00:12:06.468 "trtype": "rdma", 00:12:06.468 "traddr": "192.168.100.8", 00:12:06.468 "adrfam": "ipv4", 00:12:06.468 "trsvcid": "4420", 00:12:06.468 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:06.468 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:06.468 "hdgst": false, 00:12:06.468 "ddgst": false 00:12:06.468 }, 00:12:06.468 "method": "bdev_nvme_attach_controller" 00:12:06.468 }' 00:12:06.468 17:16:15 -- nvmf/common.sh@546 -- # IFS=, 00:12:06.468 17:16:15 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:06.468 "params": { 00:12:06.468 "name": "Nvme1", 00:12:06.468 "trtype": "rdma", 00:12:06.468 "traddr": "192.168.100.8", 00:12:06.468 "adrfam": "ipv4", 00:12:06.468 "trsvcid": "4420", 00:12:06.468 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:06.468 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:06.468 "hdgst": false, 00:12:06.468 "ddgst": false 00:12:06.468 }, 00:12:06.468 "method": "bdev_nvme_attach_controller" 00:12:06.468 }' 00:12:06.468 17:16:15 -- nvmf/common.sh@546 -- # IFS=, 00:12:06.468 17:16:15 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:06.468 "params": { 00:12:06.468 "name": "Nvme1", 00:12:06.468 "trtype": "rdma", 00:12:06.468 "traddr": "192.168.100.8", 00:12:06.468 "adrfam": "ipv4", 00:12:06.468 "trsvcid": "4420", 00:12:06.468 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:06.468 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:06.468 "hdgst": false, 00:12:06.468 "ddgst": false 00:12:06.468 }, 00:12:06.468 "method": "bdev_nvme_attach_controller" 00:12:06.468 }' 00:12:06.468 [2024-04-24 17:16:15.665017] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:12:06.468 [2024-04-24 17:16:15.665018] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:12:06.468 [2024-04-24 17:16:15.665068] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-04-24 17:16:15.665069] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:06.468 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:06.468 [2024-04-24 17:16:15.665167] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:12:06.468 [2024-04-24 17:16:15.665205] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:06.468 [2024-04-24 17:16:15.667114] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:12:06.468 [2024-04-24 17:16:15.667162] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:12:06.727 EAL: No free 2048 kB hugepages reported on node 1
00:12:06.727 EAL: No free 2048 kB hugepages reported on node 1
00:12:06.727 [2024-04-24 17:16:15.852094] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:06.727 EAL: No free 2048 kB hugepages reported on node 1
00:12:06.727 [2024-04-24 17:16:15.924466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:12:06.727 [2024-04-24 17:16:15.944331] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:06.986 EAL: No free 2048 kB hugepages reported on node 1
00:12:06.986 [2024-04-24 17:16:16.019249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:12:06.986 [2024-04-24 17:16:16.037535] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:06.986 [2024-04-24 17:16:16.097992] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:06.986 [2024-04-24 17:16:16.122650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:12:06.986 [2024-04-24 17:16:16.173884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:12:06.986 Running I/O for 1 seconds...
00:12:06.986 Running I/O for 1 seconds...
00:12:07.244 Running I/O for 1 seconds...
00:12:07.244 Running I/O for 1 seconds...
00:12:08.179
00:12:08.179 Latency(us)
00:12:08.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:08.179 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:12:08.179 Nvme1n1 : 1.01 18031.73 70.44 0.00 0.00 7076.80 4213.03 13918.60
00:12:08.179 ===================================================================================================================
00:12:08.179 Total : 18031.73 70.44 0.00 0.00 7076.80 4213.03 13918.60
00:12:08.179
00:12:08.179 Latency(us)
00:12:08.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:08.179 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:12:08.179 Nvme1n1 : 1.00 17088.97 66.75 0.00 0.00 7469.20 4681.14 17725.93
00:12:08.179 ===================================================================================================================
00:12:08.179 Total : 17088.97 66.75 0.00 0.00 7469.20 4681.14 17725.93
00:12:08.179
00:12:08.179 Latency(us)
00:12:08.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:08.179 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:12:08.179 Nvme1n1 : 1.00 15234.35 59.51 0.00 0.00 8382.22 3854.14 19348.72
00:12:08.179 ===================================================================================================================
00:12:08.179 Total : 15234.35 59.51 0.00 0.00 8382.22 3854.14 19348.72
00:12:08.179
00:12:08.179 Latency(us)
00:12:08.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:08.179 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:12:08.179 Nvme1n1 : 1.00 261993.65 1023.41 0.00 0.00 486.95 195.05 1771.03
00:12:08.179 ===================================================================================================================
00:12:08.179 Total : 261993.65 1023.41 0.00 0.00 486.95 195.05 1771.03
00:12:08.439 17:16:17 -- target/bdev_io_wait.sh@38 -- # wait 3000978 00:12:08.439
17:16:17 -- target/bdev_io_wait.sh@39 -- # wait 3000980 00:12:08.439 17:16:17 -- target/bdev_io_wait.sh@40 -- # wait 3000983 00:12:08.439 17:16:17 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:08.439 17:16:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:08.439 17:16:17 -- common/autotest_common.sh@10 -- # set +x 00:12:08.439 17:16:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:08.439 17:16:17 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:08.439 17:16:17 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:08.439 17:16:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:08.439 17:16:17 -- nvmf/common.sh@117 -- # sync 00:12:08.439 17:16:17 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:08.439 17:16:17 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:08.439 17:16:17 -- nvmf/common.sh@120 -- # set +e 00:12:08.439 17:16:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:08.439 17:16:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:08.439 rmmod nvme_rdma 00:12:08.439 rmmod nvme_fabrics 00:12:08.439 17:16:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:08.439 17:16:17 -- nvmf/common.sh@124 -- # set -e 00:12:08.439 17:16:17 -- nvmf/common.sh@125 -- # return 0 00:12:08.439 17:16:17 -- nvmf/common.sh@478 -- # '[' -n 3000940 ']' 00:12:08.439 17:16:17 -- nvmf/common.sh@479 -- # killprocess 3000940 00:12:08.439 17:16:17 -- common/autotest_common.sh@936 -- # '[' -z 3000940 ']' 00:12:08.439 17:16:17 -- common/autotest_common.sh@940 -- # kill -0 3000940 00:12:08.439 17:16:17 -- common/autotest_common.sh@941 -- # uname 00:12:08.439 17:16:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:08.439 17:16:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3000940 00:12:08.698 17:16:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:08.698 17:16:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:08.698 17:16:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3000940' 00:12:08.698 killing process with pid 3000940 00:12:08.698 17:16:17 -- common/autotest_common.sh@955 -- # kill 3000940 00:12:08.698 17:16:17 -- common/autotest_common.sh@960 -- # wait 3000940 00:12:08.958 17:16:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:08.958 17:16:17 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:12:08.958 00:12:08.958 real 0m8.981s 00:12:08.958 user 0m20.525s 00:12:08.958 sys 0m5.246s 00:12:08.958 17:16:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:08.958 17:16:17 -- common/autotest_common.sh@10 -- # set +x 00:12:08.958 ************************************ 00:12:08.958 END TEST nvmf_bdev_io_wait 00:12:08.958 ************************************ 00:12:08.958 17:16:18 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:12:08.958 17:16:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:08.958 17:16:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:08.958 17:16:18 -- common/autotest_common.sh@10 -- # set +x 00:12:08.958 ************************************ 00:12:08.958 START TEST nvmf_queue_depth 00:12:08.958 ************************************ 00:12:08.958 17:16:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:12:09.217 * Looking for test storage... 
00:12:09.217 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:09.217 17:16:18 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:09.217 17:16:18 -- nvmf/common.sh@7 -- # uname -s 00:12:09.217 17:16:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:09.217 17:16:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:09.217 17:16:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:09.217 17:16:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:09.217 17:16:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:09.217 17:16:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:09.217 17:16:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:09.217 17:16:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:09.217 17:16:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:09.217 17:16:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:09.217 17:16:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:12:09.217 17:16:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:12:09.217 17:16:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:09.217 17:16:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:09.217 17:16:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:09.217 17:16:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:09.217 17:16:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:09.217 17:16:18 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:09.217 17:16:18 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:09.217 17:16:18 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:09.217 17:16:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.217 17:16:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.218 17:16:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.218 17:16:18 -- paths/export.sh@5 -- # export PATH 00:12:09.218 17:16:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.218 17:16:18 -- nvmf/common.sh@47 -- # : 0 00:12:09.218 17:16:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:09.218 17:16:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:09.218 17:16:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:09.218 17:16:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:09.218 17:16:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:09.218 17:16:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:09.218 17:16:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:09.218 17:16:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:09.218 17:16:18 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:09.218 17:16:18 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:09.218 17:16:18 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:09.218 17:16:18 -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:09.218 17:16:18 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:12:09.218 17:16:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:09.218 17:16:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:09.218 17:16:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:09.218 17:16:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:09.218 17:16:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.218 17:16:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:09.218 17:16:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.218 17:16:18 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:09.218 17:16:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:09.218 17:16:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:09.218 17:16:18 -- common/autotest_common.sh@10 -- # set +x 00:12:14.490 17:16:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:14.490 17:16:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:14.490 17:16:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:14.490 17:16:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:14.490 17:16:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:14.490 17:16:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:14.490 17:16:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:14.490 17:16:22 -- nvmf/common.sh@295 -- # net_devs=() 
00:12:14.490 17:16:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:14.490 17:16:22 -- nvmf/common.sh@296 -- # e810=() 00:12:14.490 17:16:22 -- nvmf/common.sh@296 -- # local -ga e810 00:12:14.490 17:16:22 -- nvmf/common.sh@297 -- # x722=() 00:12:14.490 17:16:22 -- nvmf/common.sh@297 -- # local -ga x722 00:12:14.490 17:16:22 -- nvmf/common.sh@298 -- # mlx=() 00:12:14.490 17:16:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:14.490 17:16:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:14.490 17:16:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:14.490 17:16:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:14.490 17:16:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:14.490 17:16:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:14.490 17:16:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:14.490 17:16:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:14.490 17:16:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:14.490 17:16:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:14.490 17:16:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:14.490 17:16:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:14.490 17:16:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:14.490 17:16:22 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:14.490 17:16:22 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:14.490 17:16:22 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:14.490 17:16:22 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:14.490 17:16:22 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:14.490 17:16:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:14.490 17:16:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:14.490 17:16:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:12:14.490 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:12:14.490 17:16:22 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:14.490 17:16:22 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:14.490 17:16:22 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:14.490 17:16:22 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:14.490 17:16:22 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:14.490 17:16:22 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:14.490 17:16:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:14.490 17:16:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:12:14.490 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:12:14.490 17:16:22 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:14.490 17:16:22 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:14.490 17:16:22 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:14.490 17:16:22 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:14.490 17:16:22 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:14.490 17:16:22 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:14.490 17:16:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:14.490 17:16:22 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:14.490 17:16:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:14.490 17:16:22 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.490 17:16:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:14.490 17:16:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.490 17:16:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:12:14.490 Found net devices under 0000:da:00.0: mlx_0_0 00:12:14.490 17:16:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.490 17:16:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:14.490 17:16:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.491 17:16:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:14.491 17:16:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.491 17:16:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:12:14.491 Found net devices under 0000:da:00.1: mlx_0_1 00:12:14.491 17:16:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.491 17:16:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:14.491 17:16:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:14.491 17:16:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:14.491 17:16:22 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:12:14.491 17:16:22 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:12:14.491 17:16:22 -- nvmf/common.sh@409 -- # rdma_device_init 00:12:14.491 17:16:22 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:12:14.491 17:16:22 -- nvmf/common.sh@58 -- # uname 00:12:14.491 17:16:22 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:14.491 17:16:22 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:14.491 17:16:22 -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:14.491 17:16:22 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:14.491 17:16:22 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:14.491 17:16:22 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:14.491 17:16:22 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:14.491 17:16:22 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:14.491 17:16:22 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:12:14.491 17:16:22 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:14.491 17:16:22 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:14.491 17:16:22 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:14.491 17:16:22 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:14.491 17:16:22 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:14.491 17:16:22 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:14.491 17:16:23 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:14.491 17:16:23 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:14.491 17:16:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:14.491 17:16:23 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:14.491 17:16:23 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:14.491 17:16:23 -- nvmf/common.sh@105 -- # continue 2 00:12:14.491 17:16:23 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:14.491 17:16:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:14.491 17:16:23 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:14.491 17:16:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:14.491 17:16:23 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:14.491 17:16:23 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:14.491 17:16:23 -- 
nvmf/common.sh@105 -- # continue 2 00:12:14.491 17:16:23 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:14.491 17:16:23 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:14.491 17:16:23 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:14.491 17:16:23 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:14.491 17:16:23 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:14.491 17:16:23 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:14.491 17:16:23 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:14.491 17:16:23 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:14.491 17:16:23 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:14.491 430: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:14.491 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:12:14.491 altname enp218s0f0np0 00:12:14.491 altname ens818f0np0 00:12:14.491 inet 192.168.100.8/24 scope global mlx_0_0 00:12:14.491 valid_lft forever preferred_lft forever 00:12:14.491 17:16:23 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:14.491 17:16:23 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:14.491 17:16:23 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:14.491 17:16:23 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:14.491 17:16:23 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:14.491 17:16:23 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:14.491 17:16:23 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:14.491 17:16:23 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:14.491 17:16:23 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:14.491 431: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:14.491 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:12:14.491 altname enp218s0f1np1 00:12:14.491 altname ens818f1np1 00:12:14.491 inet 192.168.100.9/24 scope global mlx_0_1 00:12:14.491 valid_lft forever preferred_lft forever 00:12:14.491 17:16:23 -- nvmf/common.sh@411 -- # return 0 00:12:14.491 17:16:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:14.491 17:16:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:14.491 17:16:23 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:12:14.491 17:16:23 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:12:14.491 17:16:23 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:14.491 17:16:23 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:14.491 17:16:23 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:14.491 17:16:23 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:14.491 17:16:23 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:14.491 17:16:23 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:14.491 17:16:23 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:14.491 17:16:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:14.491 17:16:23 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:14.491 17:16:23 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:14.491 17:16:23 -- nvmf/common.sh@105 -- # continue 2 00:12:14.491 17:16:23 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:14.491 17:16:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:14.491 17:16:23 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:14.491 17:16:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:14.491 17:16:23 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:12:14.491 17:16:23 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:14.491 17:16:23 -- nvmf/common.sh@105 -- # continue 2 00:12:14.491 17:16:23 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:14.491 17:16:23 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:14.491 17:16:23 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:14.491 17:16:23 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:14.491 17:16:23 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:14.491 17:16:23 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:14.491 17:16:23 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:14.491 17:16:23 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:14.491 17:16:23 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:14.491 17:16:23 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:14.491 17:16:23 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:14.491 17:16:23 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:14.491 17:16:23 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:12:14.491 192.168.100.9' 00:12:14.491 17:16:23 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:12:14.491 192.168.100.9' 00:12:14.491 17:16:23 -- nvmf/common.sh@446 -- # head -n 1 00:12:14.491 17:16:23 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:14.491 17:16:23 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:12:14.491 192.168.100.9' 00:12:14.491 17:16:23 -- nvmf/common.sh@447 -- # tail -n +2 00:12:14.491 17:16:23 -- nvmf/common.sh@447 -- # head -n 1 00:12:14.491 17:16:23 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:14.491 17:16:23 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:12:14.491 17:16:23 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:14.491 17:16:23 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:12:14.491 17:16:23 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:12:14.491 17:16:23 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:12:14.491 17:16:23 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:14.491 17:16:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:14.491 17:16:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:14.491 17:16:23 -- common/autotest_common.sh@10 -- # set +x 00:12:14.491 17:16:23 -- nvmf/common.sh@470 -- # nvmfpid=3003247 00:12:14.491 17:16:23 -- nvmf/common.sh@471 -- # waitforlisten 3003247 00:12:14.491 17:16:23 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:14.491 17:16:23 -- common/autotest_common.sh@817 -- # '[' -z 3003247 ']' 00:12:14.491 17:16:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.491 17:16:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:14.491 17:16:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.491 17:16:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:14.491 17:16:23 -- common/autotest_common.sh@10 -- # set +x 00:12:14.491 [2024-04-24 17:16:23.198502] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:12:14.491 [2024-04-24 17:16:23.198549] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.491 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.491 [2024-04-24 17:16:23.254503] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.491 [2024-04-24 17:16:23.327449] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:14.492 [2024-04-24 17:16:23.327484] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:14.492 [2024-04-24 17:16:23.327491] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:14.492 [2024-04-24 17:16:23.327497] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:14.492 [2024-04-24 17:16:23.327502] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:14.492 [2024-04-24 17:16:23.327524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.750 17:16:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:14.750 17:16:23 -- common/autotest_common.sh@850 -- # return 0 00:12:14.750 17:16:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:14.750 17:16:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:14.750 17:16:23 -- common/autotest_common.sh@10 -- # set +x 00:12:15.009 17:16:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.009 17:16:24 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:15.009 17:16:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:15.009 17:16:24 -- common/autotest_common.sh@10 -- # set +x 00:12:15.009 [2024-04-24 17:16:24.045317] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xbfb0a0/0xbff590) succeed. 00:12:15.009 [2024-04-24 17:16:24.054158] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xbfc5a0/0xc40c20) succeed. 
00:12:15.009 17:16:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:15.009 17:16:24 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:15.009 17:16:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:15.009 17:16:24 -- common/autotest_common.sh@10 -- # set +x 00:12:15.009 Malloc0 00:12:15.009 17:16:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:15.009 17:16:24 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:15.009 17:16:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:15.009 17:16:24 -- common/autotest_common.sh@10 -- # set +x 00:12:15.009 17:16:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:15.009 17:16:24 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:15.009 17:16:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:15.009 17:16:24 -- common/autotest_common.sh@10 -- # set +x 00:12:15.009 17:16:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:15.009 17:16:24 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:15.009 17:16:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:15.009 17:16:24 -- common/autotest_common.sh@10 -- # set +x 00:12:15.009 [2024-04-24 17:16:24.128464] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:15.009 17:16:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:15.009 17:16:24 -- target/queue_depth.sh@30 -- # bdevperf_pid=3003281 00:12:15.009 17:16:24 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:15.009 17:16:24 -- target/queue_depth.sh@33 -- # waitforlisten 3003281 /var/tmp/bdevperf.sock 00:12:15.009 17:16:24 -- common/autotest_common.sh@817 -- # '[' -z 3003281 ']' 00:12:15.009 17:16:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:15.009 17:16:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:15.009 17:16:24 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:15.009 17:16:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:15.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:15.009 17:16:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:15.009 17:16:24 -- common/autotest_common.sh@10 -- # set +x 00:12:15.009 [2024-04-24 17:16:24.175059] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:12:15.009 [2024-04-24 17:16:24.175102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3003281 ] 00:12:15.009 EAL: No free 2048 kB hugepages reported on node 1 00:12:15.009 [2024-04-24 17:16:24.229561] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.268 [2024-04-24 17:16:24.308164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.835 17:16:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:15.835 17:16:24 -- common/autotest_common.sh@850 -- # return 0 00:12:15.835 17:16:24 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:15.835 17:16:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:15.835 17:16:24 -- common/autotest_common.sh@10 -- # set +x 00:12:15.835 NVMe0n1 00:12:15.835 17:16:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:15.835 17:16:25 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:16.094 Running I/O for 10 seconds... 00:12:26.085 00:12:26.085 Latency(us) 00:12:26.085 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:26.085 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:26.085 Verification LBA range: start 0x0 length 0x4000 00:12:26.085 NVMe0n1 : 10.03 17859.21 69.76 0.00 0.00 57200.07 21595.67 37698.80 00:12:26.085 =================================================================================================================== 00:12:26.085 Total : 17859.21 69.76 0.00 0.00 57200.07 21595.67 37698.80 00:12:26.085 0 00:12:26.085 17:16:35 -- target/queue_depth.sh@39 -- # killprocess 3003281 00:12:26.085 17:16:35 -- common/autotest_common.sh@936 -- # '[' -z 3003281 ']' 00:12:26.085 17:16:35 -- common/autotest_common.sh@940 -- # kill -0 3003281 00:12:26.085 17:16:35 -- common/autotest_common.sh@941 -- # uname 00:12:26.085 17:16:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:26.085 17:16:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3003281 00:12:26.085 17:16:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:26.085 17:16:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:26.085 17:16:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3003281' 00:12:26.085 killing process with pid 3003281 00:12:26.085 17:16:35 -- common/autotest_common.sh@955 -- # kill 3003281 00:12:26.085 Received shutdown signal, test time was about 10.000000 seconds 00:12:26.086 00:12:26.086 Latency(us) 00:12:26.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:26.086 =================================================================================================================== 00:12:26.086 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:26.086 17:16:35 -- common/autotest_common.sh@960 -- # wait 3003281 00:12:26.344 17:16:35 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:26.344 17:16:35 -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:26.344 17:16:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:26.344 17:16:35 -- nvmf/common.sh@117 -- # sync 00:12:26.344 17:16:35 -- nvmf/common.sh@119 -- # '[' rdma == tcp 
']' 00:12:26.344 17:16:35 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:26.344 17:16:35 -- nvmf/common.sh@120 -- # set +e 00:12:26.344 17:16:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:26.344 17:16:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:26.344 rmmod nvme_rdma 00:12:26.344 rmmod nvme_fabrics 00:12:26.344 17:16:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:26.344 17:16:35 -- nvmf/common.sh@124 -- # set -e 00:12:26.344 17:16:35 -- nvmf/common.sh@125 -- # return 0 00:12:26.344 17:16:35 -- nvmf/common.sh@478 -- # '[' -n 3003247 ']' 00:12:26.344 17:16:35 -- nvmf/common.sh@479 -- # killprocess 3003247 00:12:26.344 17:16:35 -- common/autotest_common.sh@936 -- # '[' -z 3003247 ']' 00:12:26.344 17:16:35 -- common/autotest_common.sh@940 -- # kill -0 3003247 00:12:26.344 17:16:35 -- common/autotest_common.sh@941 -- # uname 00:12:26.344 17:16:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:26.344 17:16:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3003247 00:12:26.344 17:16:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:26.344 17:16:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:26.344 17:16:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3003247' 00:12:26.344 killing process with pid 3003247 00:12:26.344 17:16:35 -- common/autotest_common.sh@955 -- # kill 3003247 00:12:26.344 17:16:35 -- common/autotest_common.sh@960 -- # wait 3003247 00:12:26.602 17:16:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:26.602 17:16:35 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:12:26.602 00:12:26.602 real 0m17.693s 00:12:26.602 user 0m25.511s 00:12:26.602 sys 0m4.341s 00:12:26.602 17:16:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:26.603 17:16:35 -- common/autotest_common.sh@10 -- # set +x 00:12:26.603 ************************************ 00:12:26.603 END TEST nvmf_queue_depth 00:12:26.603 ************************************ 00:12:26.861 17:16:35 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:12:26.861 17:16:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:26.861 17:16:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:26.861 17:16:35 -- common/autotest_common.sh@10 -- # set +x 00:12:26.861 ************************************ 00:12:26.861 START TEST nvmf_multipath 00:12:26.861 ************************************ 00:12:26.861 17:16:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:12:26.861 * Looking for test storage... 
00:12:26.861 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:26.861 17:16:36 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:26.861 17:16:36 -- nvmf/common.sh@7 -- # uname -s 00:12:26.861 17:16:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:26.861 17:16:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:26.861 17:16:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:26.861 17:16:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:26.861 17:16:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:26.861 17:16:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:26.861 17:16:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:26.861 17:16:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:26.861 17:16:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:26.861 17:16:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:26.861 17:16:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:12:26.861 17:16:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:12:26.861 17:16:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:26.861 17:16:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:26.861 17:16:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:26.861 17:16:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:26.861 17:16:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:26.861 17:16:36 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:26.861 17:16:36 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:26.861 17:16:36 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:26.861 17:16:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.862 17:16:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.862 17:16:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.862 17:16:36 -- paths/export.sh@5 -- # export PATH 00:12:26.862 17:16:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.862 17:16:36 -- nvmf/common.sh@47 -- # : 0 00:12:26.862 17:16:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:26.862 17:16:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:26.862 17:16:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:26.862 17:16:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:26.862 17:16:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:26.862 17:16:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:26.862 17:16:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:26.862 17:16:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:26.862 17:16:36 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:26.862 17:16:36 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:26.862 17:16:36 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:26.862 17:16:36 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:26.862 17:16:36 -- target/multipath.sh@43 -- # nvmftestinit 00:12:26.862 17:16:36 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:12:26.862 17:16:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:26.862 17:16:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:26.862 17:16:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:26.862 17:16:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:26.862 17:16:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.862 17:16:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:26.862 17:16:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.862 17:16:36 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:26.862 17:16:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:26.862 17:16:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:26.862 17:16:36 -- common/autotest_common.sh@10 -- # set +x 00:12:32.154 17:16:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:32.154 17:16:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:32.154 17:16:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:32.154 17:16:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:32.154 17:16:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:32.154 17:16:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:32.154 17:16:41 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:12:32.154 17:16:41 -- nvmf/common.sh@295 -- # net_devs=() 00:12:32.154 17:16:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:32.154 17:16:41 -- nvmf/common.sh@296 -- # e810=() 00:12:32.154 17:16:41 -- nvmf/common.sh@296 -- # local -ga e810 00:12:32.154 17:16:41 -- nvmf/common.sh@297 -- # x722=() 00:12:32.154 17:16:41 -- nvmf/common.sh@297 -- # local -ga x722 00:12:32.154 17:16:41 -- nvmf/common.sh@298 -- # mlx=() 00:12:32.154 17:16:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:32.154 17:16:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:32.154 17:16:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:32.154 17:16:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:32.154 17:16:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:32.154 17:16:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:32.154 17:16:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:32.154 17:16:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:32.154 17:16:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:32.154 17:16:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:32.154 17:16:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:32.154 17:16:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:32.154 17:16:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:32.154 17:16:41 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:32.154 17:16:41 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:32.154 17:16:41 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:32.154 17:16:41 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:32.154 17:16:41 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:32.154 17:16:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:32.154 17:16:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:32.154 17:16:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:12:32.154 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:12:32.154 17:16:41 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:32.154 17:16:41 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:32.154 17:16:41 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:32.155 17:16:41 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:32.155 17:16:41 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:32.155 17:16:41 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:32.155 17:16:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:32.155 17:16:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:12:32.155 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:12:32.155 17:16:41 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:32.155 17:16:41 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:32.155 17:16:41 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:32.155 17:16:41 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:32.155 17:16:41 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:32.155 17:16:41 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:32.155 17:16:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:32.155 17:16:41 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:32.155 17:16:41 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:32.155 17:16:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.155 17:16:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:32.155 17:16:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.155 17:16:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:12:32.155 Found net devices under 0000:da:00.0: mlx_0_0 00:12:32.155 17:16:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.155 17:16:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:32.155 17:16:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.155 17:16:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:32.155 17:16:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.155 17:16:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:12:32.155 Found net devices under 0000:da:00.1: mlx_0_1 00:12:32.155 17:16:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.155 17:16:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:32.155 17:16:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:32.155 17:16:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:32.155 17:16:41 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:12:32.155 17:16:41 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:12:32.155 17:16:41 -- nvmf/common.sh@409 -- # rdma_device_init 00:12:32.155 17:16:41 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:12:32.155 17:16:41 -- nvmf/common.sh@58 -- # uname 00:12:32.155 17:16:41 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:32.155 17:16:41 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:32.155 17:16:41 -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:32.155 17:16:41 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:32.155 17:16:41 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:32.155 17:16:41 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:32.155 17:16:41 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:32.155 17:16:41 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:32.155 17:16:41 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:12:32.155 17:16:41 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:32.155 17:16:41 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:32.155 17:16:41 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:32.155 17:16:41 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:32.155 17:16:41 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:32.155 17:16:41 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:32.155 17:16:41 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:32.155 17:16:41 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:32.155 17:16:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:32.155 17:16:41 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:32.155 17:16:41 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:32.155 17:16:41 -- nvmf/common.sh@105 -- # continue 2 00:12:32.155 17:16:41 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:32.155 17:16:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:32.155 17:16:41 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:32.155 17:16:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:32.155 17:16:41 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\1 ]] 00:12:32.155 17:16:41 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:32.155 17:16:41 -- nvmf/common.sh@105 -- # continue 2 00:12:32.155 17:16:41 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:32.155 17:16:41 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:32.155 17:16:41 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:32.155 17:16:41 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:32.155 17:16:41 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:32.155 17:16:41 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:32.155 17:16:41 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:32.155 17:16:41 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:32.155 17:16:41 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:32.155 430: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:32.155 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:12:32.155 altname enp218s0f0np0 00:12:32.155 altname ens818f0np0 00:12:32.155 inet 192.168.100.8/24 scope global mlx_0_0 00:12:32.155 valid_lft forever preferred_lft forever 00:12:32.155 17:16:41 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:32.155 17:16:41 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:32.155 17:16:41 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:32.155 17:16:41 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:32.155 17:16:41 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:32.155 17:16:41 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:32.155 17:16:41 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:32.155 17:16:41 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:32.155 17:16:41 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:32.155 431: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:32.155 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:12:32.155 altname enp218s0f1np1 00:12:32.155 altname ens818f1np1 00:12:32.155 inet 192.168.100.9/24 scope global mlx_0_1 00:12:32.155 valid_lft forever preferred_lft forever 00:12:32.155 17:16:41 -- nvmf/common.sh@411 -- # return 0 00:12:32.155 17:16:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:32.155 17:16:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:32.155 17:16:41 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:12:32.155 17:16:41 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:12:32.155 17:16:41 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:32.155 17:16:41 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:32.155 17:16:41 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:32.155 17:16:41 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:32.155 17:16:41 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:32.155 17:16:41 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:32.155 17:16:41 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:32.155 17:16:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:32.155 17:16:41 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:32.155 17:16:41 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:32.155 17:16:41 -- nvmf/common.sh@105 -- # continue 2 00:12:32.155 17:16:41 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:32.155 17:16:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:32.155 17:16:41 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:32.155 17:16:41 -- nvmf/common.sh@102 -- # for rxe_net_dev 
in "${rxe_net_devs[@]}" 00:12:32.155 17:16:41 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:32.155 17:16:41 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:32.155 17:16:41 -- nvmf/common.sh@105 -- # continue 2 00:12:32.155 17:16:41 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:32.155 17:16:41 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:32.155 17:16:41 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:32.155 17:16:41 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:32.155 17:16:41 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:32.155 17:16:41 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:32.155 17:16:41 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:32.155 17:16:41 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:32.155 17:16:41 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:32.155 17:16:41 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:32.155 17:16:41 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:32.155 17:16:41 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:32.155 17:16:41 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:12:32.155 192.168.100.9' 00:12:32.155 17:16:41 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:12:32.155 192.168.100.9' 00:12:32.155 17:16:41 -- nvmf/common.sh@446 -- # head -n 1 00:12:32.155 17:16:41 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:32.155 17:16:41 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:12:32.155 192.168.100.9' 00:12:32.155 17:16:41 -- nvmf/common.sh@447 -- # tail -n +2 00:12:32.155 17:16:41 -- nvmf/common.sh@447 -- # head -n 1 00:12:32.155 17:16:41 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:32.155 17:16:41 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:12:32.155 17:16:41 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:32.155 17:16:41 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:12:32.155 17:16:41 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:12:32.155 17:16:41 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:12:32.155 17:16:41 -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:12:32.155 17:16:41 -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:12:32.155 17:16:41 -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:12:32.155 run this test only with TCP transport for now 00:12:32.155 17:16:41 -- target/multipath.sh@53 -- # nvmftestfini 00:12:32.155 17:16:41 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:32.155 17:16:41 -- nvmf/common.sh@117 -- # sync 00:12:32.155 17:16:41 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:32.155 17:16:41 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:32.155 17:16:41 -- nvmf/common.sh@120 -- # set +e 00:12:32.155 17:16:41 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:32.155 17:16:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:32.155 rmmod nvme_rdma 00:12:32.155 rmmod nvme_fabrics 00:12:32.155 17:16:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:32.155 17:16:41 -- nvmf/common.sh@124 -- # set -e 00:12:32.155 17:16:41 -- nvmf/common.sh@125 -- # return 0 00:12:32.155 17:16:41 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:12:32.155 17:16:41 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:32.155 17:16:41 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:12:32.156 17:16:41 -- target/multipath.sh@54 -- # exit 0 00:12:32.156 17:16:41 -- target/multipath.sh@1 -- # nvmftestfini 00:12:32.156 17:16:41 -- 
nvmf/common.sh@477 -- # nvmfcleanup 00:12:32.156 17:16:41 -- nvmf/common.sh@117 -- # sync 00:12:32.156 17:16:41 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:32.156 17:16:41 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:32.156 17:16:41 -- nvmf/common.sh@120 -- # set +e 00:12:32.156 17:16:41 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:32.156 17:16:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:32.156 17:16:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:32.156 17:16:41 -- nvmf/common.sh@124 -- # set -e 00:12:32.156 17:16:41 -- nvmf/common.sh@125 -- # return 0 00:12:32.156 17:16:41 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:12:32.156 17:16:41 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:32.156 17:16:41 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:12:32.156 00:12:32.156 real 0m5.304s 00:12:32.156 user 0m1.470s 00:12:32.156 sys 0m3.959s 00:12:32.156 17:16:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:32.156 17:16:41 -- common/autotest_common.sh@10 -- # set +x 00:12:32.156 ************************************ 00:12:32.156 END TEST nvmf_multipath 00:12:32.156 ************************************ 00:12:32.156 17:16:41 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:12:32.156 17:16:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:32.156 17:16:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:32.156 17:16:41 -- common/autotest_common.sh@10 -- # set +x 00:12:32.415 ************************************ 00:12:32.415 START TEST nvmf_zcopy 00:12:32.415 ************************************ 00:12:32.415 17:16:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:12:32.415 * Looking for test storage... 
00:12:32.415 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:32.415 17:16:41 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:32.415 17:16:41 -- nvmf/common.sh@7 -- # uname -s 00:12:32.415 17:16:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.415 17:16:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.415 17:16:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.415 17:16:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.415 17:16:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.415 17:16:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.415 17:16:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.415 17:16:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.415 17:16:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.415 17:16:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.415 17:16:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:12:32.415 17:16:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:12:32.415 17:16:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.415 17:16:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.415 17:16:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:32.415 17:16:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:32.415 17:16:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:32.415 17:16:41 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.415 17:16:41 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.415 17:16:41 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.415 17:16:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.415 17:16:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.415 17:16:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.415 17:16:41 -- paths/export.sh@5 -- # export PATH 00:12:32.415 17:16:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.415 17:16:41 -- nvmf/common.sh@47 -- # : 0 00:12:32.415 17:16:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:32.415 17:16:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:32.415 17:16:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:32.415 17:16:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.415 17:16:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.415 17:16:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:32.415 17:16:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:32.415 17:16:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:32.415 17:16:41 -- target/zcopy.sh@12 -- # nvmftestinit 00:12:32.415 17:16:41 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:12:32.415 17:16:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:32.415 17:16:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:32.415 17:16:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:32.415 17:16:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:32.415 17:16:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.415 17:16:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:32.415 17:16:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.415 17:16:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:32.415 17:16:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:32.415 17:16:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:32.415 17:16:41 -- common/autotest_common.sh@10 -- # set +x 00:12:37.789 17:16:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:37.789 17:16:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:37.789 17:16:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:37.789 17:16:46 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:37.789 17:16:46 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:37.789 17:16:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:37.789 17:16:46 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:37.789 17:16:46 -- nvmf/common.sh@295 -- # net_devs=() 00:12:37.789 17:16:46 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:37.789 17:16:46 -- nvmf/common.sh@296 -- # e810=() 00:12:37.789 17:16:46 -- nvmf/common.sh@296 -- # local -ga e810 00:12:37.789 17:16:46 -- nvmf/common.sh@297 -- # x722=() 
00:12:37.790 17:16:46 -- nvmf/common.sh@297 -- # local -ga x722 00:12:37.790 17:16:46 -- nvmf/common.sh@298 -- # mlx=() 00:12:37.790 17:16:46 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:37.790 17:16:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:37.790 17:16:46 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:37.790 17:16:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:37.790 17:16:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:37.790 17:16:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:37.790 17:16:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:37.790 17:16:46 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:37.790 17:16:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:37.790 17:16:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:37.790 17:16:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:37.790 17:16:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:37.790 17:16:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:37.790 17:16:46 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:37.790 17:16:46 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:37.790 17:16:46 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:37.790 17:16:46 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:37.790 17:16:46 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:37.790 17:16:46 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:37.790 17:16:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:37.790 17:16:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:12:37.790 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:12:37.790 17:16:46 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:37.790 17:16:46 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:37.790 17:16:46 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:37.790 17:16:46 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:37.790 17:16:46 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:37.790 17:16:46 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:37.790 17:16:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:37.790 17:16:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:12:37.790 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:12:37.790 17:16:46 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:37.790 17:16:46 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:37.790 17:16:46 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:37.790 17:16:46 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:37.790 17:16:46 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:37.790 17:16:46 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:37.790 17:16:46 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:37.790 17:16:46 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:37.790 17:16:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:37.790 17:16:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.790 17:16:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:37.790 17:16:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.790 17:16:46 -- nvmf/common.sh@389 -- # 
echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:12:37.790 Found net devices under 0000:da:00.0: mlx_0_0 00:12:37.790 17:16:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.790 17:16:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:37.790 17:16:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.790 17:16:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:37.790 17:16:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.790 17:16:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:12:37.790 Found net devices under 0000:da:00.1: mlx_0_1 00:12:37.790 17:16:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.790 17:16:46 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:37.790 17:16:46 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:37.790 17:16:46 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:37.790 17:16:46 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:12:37.790 17:16:46 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:12:37.790 17:16:46 -- nvmf/common.sh@409 -- # rdma_device_init 00:12:37.790 17:16:46 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:12:37.790 17:16:46 -- nvmf/common.sh@58 -- # uname 00:12:37.790 17:16:46 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:37.790 17:16:46 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:37.790 17:16:46 -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:37.790 17:16:46 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:37.790 17:16:46 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:37.790 17:16:46 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:37.790 17:16:46 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:37.790 17:16:46 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:37.790 17:16:46 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:12:37.790 17:16:46 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:37.790 17:16:46 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:37.790 17:16:46 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:37.790 17:16:46 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:37.790 17:16:46 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:37.790 17:16:46 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:37.790 17:16:46 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:37.790 17:16:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:37.790 17:16:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:37.790 17:16:46 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:37.790 17:16:46 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:37.790 17:16:46 -- nvmf/common.sh@105 -- # continue 2 00:12:37.790 17:16:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:37.790 17:16:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:37.790 17:16:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:37.790 17:16:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:37.790 17:16:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:37.790 17:16:46 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:37.790 17:16:46 -- nvmf/common.sh@105 -- # continue 2 00:12:37.790 17:16:46 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:37.790 17:16:46 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:37.790 17:16:46 -- nvmf/common.sh@112 -- # 
interface=mlx_0_0 00:12:37.790 17:16:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:37.790 17:16:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:37.790 17:16:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:37.790 17:16:46 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:37.790 17:16:46 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:37.790 17:16:46 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:37.790 430: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:37.790 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:12:37.790 altname enp218s0f0np0 00:12:37.790 altname ens818f0np0 00:12:37.790 inet 192.168.100.8/24 scope global mlx_0_0 00:12:37.790 valid_lft forever preferred_lft forever 00:12:37.790 17:16:46 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:37.790 17:16:46 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:37.790 17:16:46 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:37.790 17:16:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:37.790 17:16:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:37.790 17:16:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:37.790 17:16:46 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:37.790 17:16:46 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:37.790 17:16:46 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:37.790 431: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:37.790 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:12:37.790 altname enp218s0f1np1 00:12:37.790 altname ens818f1np1 00:12:37.790 inet 192.168.100.9/24 scope global mlx_0_1 00:12:37.790 valid_lft forever preferred_lft forever 00:12:37.790 17:16:46 -- nvmf/common.sh@411 -- # return 0 00:12:37.790 17:16:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:37.790 17:16:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:37.790 17:16:46 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:12:37.790 17:16:46 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:12:37.790 17:16:46 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:37.790 17:16:46 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:37.790 17:16:46 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:37.790 17:16:46 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:37.790 17:16:46 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:37.790 17:16:46 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:37.790 17:16:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:37.790 17:16:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:37.790 17:16:46 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:37.790 17:16:46 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:37.790 17:16:46 -- nvmf/common.sh@105 -- # continue 2 00:12:37.790 17:16:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:37.790 17:16:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:37.790 17:16:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:37.790 17:16:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:37.790 17:16:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:37.790 17:16:46 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:37.790 17:16:46 -- nvmf/common.sh@105 -- # continue 2 00:12:37.790 17:16:46 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:37.790 17:16:46 -- 
nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:37.790 17:16:46 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:37.790 17:16:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:37.790 17:16:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:37.790 17:16:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:37.790 17:16:46 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:37.790 17:16:46 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:37.790 17:16:46 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:37.790 17:16:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:37.790 17:16:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:37.790 17:16:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:37.790 17:16:46 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:12:37.790 192.168.100.9' 00:12:37.790 17:16:46 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:12:37.790 192.168.100.9' 00:12:37.790 17:16:46 -- nvmf/common.sh@446 -- # head -n 1 00:12:37.791 17:16:46 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:37.791 17:16:46 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:12:37.791 192.168.100.9' 00:12:37.791 17:16:46 -- nvmf/common.sh@447 -- # tail -n +2 00:12:37.791 17:16:46 -- nvmf/common.sh@447 -- # head -n 1 00:12:37.791 17:16:46 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:37.791 17:16:46 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:12:37.791 17:16:46 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:37.791 17:16:46 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:12:37.791 17:16:46 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:12:37.791 17:16:46 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:12:37.791 17:16:46 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:37.791 17:16:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:37.791 17:16:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:37.791 17:16:46 -- common/autotest_common.sh@10 -- # set +x 00:12:37.791 17:16:46 -- nvmf/common.sh@470 -- # nvmfpid=3007867 00:12:37.791 17:16:46 -- nvmf/common.sh@471 -- # waitforlisten 3007867 00:12:37.791 17:16:46 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:37.791 17:16:46 -- common/autotest_common.sh@817 -- # '[' -z 3007867 ']' 00:12:37.791 17:16:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.791 17:16:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:37.791 17:16:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.791 17:16:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:37.791 17:16:46 -- common/autotest_common.sh@10 -- # set +x 00:12:37.791 [2024-04-24 17:16:46.702608] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:12:37.791 [2024-04-24 17:16:46.702655] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.791 EAL: No free 2048 kB hugepages reported on node 1 00:12:37.791 [2024-04-24 17:16:46.759595] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.791 [2024-04-24 17:16:46.832654] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:37.791 [2024-04-24 17:16:46.832695] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:37.791 [2024-04-24 17:16:46.832702] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:37.791 [2024-04-24 17:16:46.832707] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:37.791 [2024-04-24 17:16:46.832711] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:37.791 [2024-04-24 17:16:46.832733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.359 17:16:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:38.359 17:16:47 -- common/autotest_common.sh@850 -- # return 0 00:12:38.359 17:16:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:38.359 17:16:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:38.359 17:16:47 -- common/autotest_common.sh@10 -- # set +x 00:12:38.359 17:16:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:38.359 17:16:47 -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:12:38.359 17:16:47 -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:12:38.359 Unsupported transport: rdma 00:12:38.359 17:16:47 -- target/zcopy.sh@17 -- # exit 0 00:12:38.359 17:16:47 -- target/zcopy.sh@1 -- # process_shm --id 0 00:12:38.359 17:16:47 -- common/autotest_common.sh@794 -- # type=--id 00:12:38.359 17:16:47 -- common/autotest_common.sh@795 -- # id=0 00:12:38.359 17:16:47 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:12:38.359 17:16:47 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:38.359 17:16:47 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:12:38.359 17:16:47 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:12:38.359 17:16:47 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:12:38.359 17:16:47 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:38.359 nvmf_trace.0 00:12:38.359 17:16:47 -- common/autotest_common.sh@809 -- # return 0 00:12:38.359 17:16:47 -- target/zcopy.sh@1 -- # nvmftestfini 00:12:38.359 17:16:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:38.359 17:16:47 -- nvmf/common.sh@117 -- # sync 00:12:38.359 17:16:47 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:38.359 17:16:47 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:38.359 17:16:47 -- nvmf/common.sh@120 -- # set +e 00:12:38.359 17:16:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:38.359 17:16:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:38.359 rmmod nvme_rdma 00:12:38.359 rmmod nvme_fabrics 00:12:38.359 17:16:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:38.619 17:16:47 -- nvmf/common.sh@124 -- # set -e 
00:12:38.619 17:16:47 -- nvmf/common.sh@125 -- # return 0 00:12:38.619 17:16:47 -- nvmf/common.sh@478 -- # '[' -n 3007867 ']' 00:12:38.619 17:16:47 -- nvmf/common.sh@479 -- # killprocess 3007867 00:12:38.619 17:16:47 -- common/autotest_common.sh@936 -- # '[' -z 3007867 ']' 00:12:38.619 17:16:47 -- common/autotest_common.sh@940 -- # kill -0 3007867 00:12:38.619 17:16:47 -- common/autotest_common.sh@941 -- # uname 00:12:38.619 17:16:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:38.619 17:16:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3007867 00:12:38.619 17:16:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:38.619 17:16:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:38.619 17:16:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3007867' 00:12:38.619 killing process with pid 3007867 00:12:38.619 17:16:47 -- common/autotest_common.sh@955 -- # kill 3007867 00:12:38.619 17:16:47 -- common/autotest_common.sh@960 -- # wait 3007867 00:12:38.619 17:16:47 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:38.619 17:16:47 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:12:38.619 00:12:38.619 real 0m6.426s 00:12:38.619 user 0m2.783s 00:12:38.619 sys 0m4.163s 00:12:38.619 17:16:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:38.619 17:16:47 -- common/autotest_common.sh@10 -- # set +x 00:12:38.619 ************************************ 00:12:38.619 END TEST nvmf_zcopy 00:12:38.619 ************************************ 00:12:38.878 17:16:47 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:12:38.878 17:16:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:38.878 17:16:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:38.878 17:16:47 -- common/autotest_common.sh@10 -- # set +x 00:12:38.878 ************************************ 00:12:38.878 START TEST nvmf_nmic 00:12:38.878 ************************************ 00:12:38.878 17:16:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:12:38.878 * Looking for test storage... 
00:12:38.878 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:38.878 17:16:48 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:38.878 17:16:48 -- nvmf/common.sh@7 -- # uname -s 00:12:38.878 17:16:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:38.878 17:16:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:38.878 17:16:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:38.878 17:16:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:38.878 17:16:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:38.878 17:16:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:38.878 17:16:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:38.878 17:16:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:38.878 17:16:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:38.878 17:16:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:38.878 17:16:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:12:38.878 17:16:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:12:38.878 17:16:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:38.878 17:16:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:38.878 17:16:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:38.878 17:16:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:38.878 17:16:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:38.878 17:16:48 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:38.878 17:16:48 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:38.878 17:16:48 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:38.878 17:16:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.878 17:16:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.878 17:16:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.878 17:16:48 -- paths/export.sh@5 -- # export PATH 00:12:38.878 17:16:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.878 17:16:48 -- nvmf/common.sh@47 -- # : 0 00:12:38.879 17:16:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:38.879 17:16:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:38.879 17:16:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:38.879 17:16:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:38.879 17:16:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:38.879 17:16:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:38.879 17:16:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:38.879 17:16:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:38.879 17:16:48 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:38.879 17:16:48 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:38.879 17:16:48 -- target/nmic.sh@14 -- # nvmftestinit 00:12:38.879 17:16:48 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:12:38.879 17:16:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:38.879 17:16:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:38.879 17:16:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:38.879 17:16:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:38.879 17:16:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.879 17:16:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:38.879 17:16:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.879 17:16:48 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:38.879 17:16:48 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:38.879 17:16:48 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:38.879 17:16:48 -- common/autotest_common.sh@10 -- # set +x 00:12:44.153 17:16:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:44.153 17:16:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:44.153 17:16:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:44.153 17:16:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:44.153 17:16:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:44.153 17:16:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:44.153 17:16:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:44.153 17:16:53 -- nvmf/common.sh@295 -- # net_devs=() 00:12:44.153 17:16:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:44.153 17:16:53 -- nvmf/common.sh@296 -- # 
e810=() 00:12:44.153 17:16:53 -- nvmf/common.sh@296 -- # local -ga e810 00:12:44.153 17:16:53 -- nvmf/common.sh@297 -- # x722=() 00:12:44.153 17:16:53 -- nvmf/common.sh@297 -- # local -ga x722 00:12:44.153 17:16:53 -- nvmf/common.sh@298 -- # mlx=() 00:12:44.153 17:16:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:44.153 17:16:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:44.153 17:16:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:44.153 17:16:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:44.153 17:16:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:44.153 17:16:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:44.153 17:16:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:44.153 17:16:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:44.153 17:16:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:44.153 17:16:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:44.153 17:16:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:44.153 17:16:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:44.153 17:16:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:44.153 17:16:53 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:44.153 17:16:53 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:44.153 17:16:53 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:44.154 17:16:53 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:44.154 17:16:53 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:44.154 17:16:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:44.154 17:16:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:44.154 17:16:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:12:44.154 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:12:44.154 17:16:53 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:44.154 17:16:53 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:44.154 17:16:53 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:44.154 17:16:53 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:44.154 17:16:53 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:44.154 17:16:53 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:44.154 17:16:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:44.154 17:16:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:12:44.154 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:12:44.154 17:16:53 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:44.154 17:16:53 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:44.154 17:16:53 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:44.154 17:16:53 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:44.154 17:16:53 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:44.154 17:16:53 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:44.154 17:16:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:44.154 17:16:53 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:44.154 17:16:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:44.154 17:16:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.154 17:16:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 
00:12:44.154 17:16:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.154 17:16:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:12:44.154 Found net devices under 0000:da:00.0: mlx_0_0 00:12:44.154 17:16:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.154 17:16:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:44.154 17:16:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.154 17:16:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:44.154 17:16:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.154 17:16:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:12:44.154 Found net devices under 0000:da:00.1: mlx_0_1 00:12:44.154 17:16:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.154 17:16:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:44.154 17:16:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:44.154 17:16:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:44.154 17:16:53 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:12:44.154 17:16:53 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:12:44.154 17:16:53 -- nvmf/common.sh@409 -- # rdma_device_init 00:12:44.154 17:16:53 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:12:44.154 17:16:53 -- nvmf/common.sh@58 -- # uname 00:12:44.154 17:16:53 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:44.154 17:16:53 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:44.154 17:16:53 -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:44.154 17:16:53 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:44.154 17:16:53 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:44.154 17:16:53 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:44.154 17:16:53 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:44.154 17:16:53 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:44.154 17:16:53 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:12:44.154 17:16:53 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:44.154 17:16:53 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:44.154 17:16:53 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:44.154 17:16:53 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:44.154 17:16:53 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:44.154 17:16:53 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:44.154 17:16:53 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:44.154 17:16:53 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:44.154 17:16:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:44.154 17:16:53 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:44.154 17:16:53 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:44.154 17:16:53 -- nvmf/common.sh@105 -- # continue 2 00:12:44.154 17:16:53 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:44.154 17:16:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:44.154 17:16:53 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:44.154 17:16:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:44.154 17:16:53 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:44.154 17:16:53 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:44.154 17:16:53 -- nvmf/common.sh@105 -- # continue 2 00:12:44.154 17:16:53 -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:12:44.154 17:16:53 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:44.154 17:16:53 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:44.154 17:16:53 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:44.154 17:16:53 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:44.154 17:16:53 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:44.154 17:16:53 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:44.154 17:16:53 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:44.154 17:16:53 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:44.154 430: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:44.154 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:12:44.154 altname enp218s0f0np0 00:12:44.154 altname ens818f0np0 00:12:44.154 inet 192.168.100.8/24 scope global mlx_0_0 00:12:44.154 valid_lft forever preferred_lft forever 00:12:44.154 17:16:53 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:44.154 17:16:53 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:44.154 17:16:53 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:44.154 17:16:53 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:44.154 17:16:53 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:44.154 17:16:53 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:44.154 17:16:53 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:44.154 17:16:53 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:44.154 17:16:53 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:44.154 431: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:44.154 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:12:44.154 altname enp218s0f1np1 00:12:44.154 altname ens818f1np1 00:12:44.154 inet 192.168.100.9/24 scope global mlx_0_1 00:12:44.154 valid_lft forever preferred_lft forever 00:12:44.154 17:16:53 -- nvmf/common.sh@411 -- # return 0 00:12:44.154 17:16:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:44.154 17:16:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:44.154 17:16:53 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:12:44.154 17:16:53 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:12:44.154 17:16:53 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:44.154 17:16:53 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:44.154 17:16:53 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:44.154 17:16:53 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:44.154 17:16:53 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:44.154 17:16:53 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:44.154 17:16:53 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:44.154 17:16:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:44.154 17:16:53 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:44.154 17:16:53 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:44.154 17:16:53 -- nvmf/common.sh@105 -- # continue 2 00:12:44.154 17:16:53 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:44.154 17:16:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:44.154 17:16:53 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:44.154 17:16:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:44.154 17:16:53 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:44.154 17:16:53 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:44.154 17:16:53 -- 
nvmf/common.sh@105 -- # continue 2 00:12:44.154 17:16:53 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:44.154 17:16:53 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:44.154 17:16:53 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:44.154 17:16:53 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:44.154 17:16:53 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:44.154 17:16:53 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:44.154 17:16:53 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:44.154 17:16:53 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:44.154 17:16:53 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:44.154 17:16:53 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:44.154 17:16:53 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:44.154 17:16:53 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:44.154 17:16:53 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:12:44.154 192.168.100.9' 00:12:44.154 17:16:53 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:12:44.154 192.168.100.9' 00:12:44.154 17:16:53 -- nvmf/common.sh@446 -- # head -n 1 00:12:44.154 17:16:53 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:44.154 17:16:53 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:12:44.154 192.168.100.9' 00:12:44.154 17:16:53 -- nvmf/common.sh@447 -- # tail -n +2 00:12:44.154 17:16:53 -- nvmf/common.sh@447 -- # head -n 1 00:12:44.154 17:16:53 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:44.154 17:16:53 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:12:44.154 17:16:53 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:44.154 17:16:53 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:12:44.154 17:16:53 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:12:44.154 17:16:53 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:12:44.154 17:16:53 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:44.154 17:16:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:44.154 17:16:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:44.155 17:16:53 -- common/autotest_common.sh@10 -- # set +x 00:12:44.155 17:16:53 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:44.155 17:16:53 -- nvmf/common.sh@470 -- # nvmfpid=3010124 00:12:44.155 17:16:53 -- nvmf/common.sh@471 -- # waitforlisten 3010124 00:12:44.155 17:16:53 -- common/autotest_common.sh@817 -- # '[' -z 3010124 ']' 00:12:44.155 17:16:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.155 17:16:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:44.155 17:16:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.155 17:16:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:44.155 17:16:53 -- common/autotest_common.sh@10 -- # set +x 00:12:44.414 [2024-04-24 17:16:53.418776] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:12:44.414 [2024-04-24 17:16:53.418818] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:44.414 EAL: No free 2048 kB hugepages reported on node 1 00:12:44.414 [2024-04-24 17:16:53.475957] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:44.414 [2024-04-24 17:16:53.555042] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:44.414 [2024-04-24 17:16:53.555079] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:44.414 [2024-04-24 17:16:53.555086] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:44.414 [2024-04-24 17:16:53.555092] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:44.414 [2024-04-24 17:16:53.555097] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:44.414 [2024-04-24 17:16:53.555131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.414 [2024-04-24 17:16:53.555228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:44.414 [2024-04-24 17:16:53.555441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:44.414 [2024-04-24 17:16:53.555443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.007 17:16:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:45.007 17:16:54 -- common/autotest_common.sh@850 -- # return 0 00:12:45.007 17:16:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:45.007 17:16:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:45.007 17:16:54 -- common/autotest_common.sh@10 -- # set +x 00:12:45.267 17:16:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:45.267 17:16:54 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:45.267 17:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.267 17:16:54 -- common/autotest_common.sh@10 -- # set +x 00:12:45.267 [2024-04-24 17:16:54.295499] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c5ff60/0x1c64450) succeed. 00:12:45.267 [2024-04-24 17:16:54.306012] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c61550/0x1ca5ae0) succeed. 
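For reference, the target-side setup the nmic test performs next (visible in the rpc_cmd traces that follow) is roughly equivalent to driving scripts/rpc.py by hand against the running nvmf_tgt. A minimal sketch, using only the RPC names, arguments and the 192.168.100.8 listener address that appear in this run:
  # sketch of the nmic target setup; rpc_cmd in the trace below wraps this script
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192      # RDMA transport (already created above)
  $rpc bdev_malloc_create 64 512 -b Malloc0                                 # 64 MiB malloc bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420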
00:12:45.267 17:16:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.267 17:16:54 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:45.267 17:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.267 17:16:54 -- common/autotest_common.sh@10 -- # set +x 00:12:45.267 Malloc0 00:12:45.267 17:16:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.267 17:16:54 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:45.267 17:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.267 17:16:54 -- common/autotest_common.sh@10 -- # set +x 00:12:45.267 17:16:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.267 17:16:54 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:45.267 17:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.267 17:16:54 -- common/autotest_common.sh@10 -- # set +x 00:12:45.267 17:16:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.267 17:16:54 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:45.267 17:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.267 17:16:54 -- common/autotest_common.sh@10 -- # set +x 00:12:45.267 [2024-04-24 17:16:54.476788] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:45.267 17:16:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.267 17:16:54 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:45.267 test case1: single bdev can't be used in multiple subsystems 00:12:45.267 17:16:54 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:45.267 17:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.267 17:16:54 -- common/autotest_common.sh@10 -- # set +x 00:12:45.267 17:16:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.267 17:16:54 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:12:45.267 17:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.267 17:16:54 -- common/autotest_common.sh@10 -- # set +x 00:12:45.267 17:16:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.267 17:16:54 -- target/nmic.sh@28 -- # nmic_status=0 00:12:45.267 17:16:54 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:45.267 17:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.267 17:16:54 -- common/autotest_common.sh@10 -- # set +x 00:12:45.267 [2024-04-24 17:16:54.500551] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:45.267 [2024-04-24 17:16:54.500567] subsystem.c:1930:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:45.267 [2024-04-24 17:16:54.500574] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.267 request: 00:12:45.267 { 00:12:45.267 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:45.267 "namespace": { 00:12:45.267 "bdev_name": "Malloc0", 00:12:45.267 "no_auto_visible": false 00:12:45.267 }, 00:12:45.267 "method": "nvmf_subsystem_add_ns", 00:12:45.267 "req_id": 1 00:12:45.267 } 00:12:45.267 Got JSON-RPC error response 
00:12:45.267 response: 00:12:45.267 { 00:12:45.267 "code": -32602, 00:12:45.267 "message": "Invalid parameters" 00:12:45.267 } 00:12:45.267 17:16:54 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:12:45.267 17:16:54 -- target/nmic.sh@29 -- # nmic_status=1 00:12:45.267 17:16:54 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:45.267 17:16:54 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:45.267 Adding namespace failed - expected result. 00:12:45.267 17:16:54 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:45.267 test case2: host connect to nvmf target in multiple paths 00:12:45.267 17:16:54 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:12:45.267 17:16:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.267 17:16:54 -- common/autotest_common.sh@10 -- # set +x 00:12:45.267 [2024-04-24 17:16:54.512609] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:12:45.527 17:16:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.527 17:16:54 -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:46.462 17:16:55 -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:12:47.398 17:16:56 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:47.398 17:16:56 -- common/autotest_common.sh@1184 -- # local i=0 00:12:47.398 17:16:56 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:47.398 17:16:56 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:47.398 17:16:56 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:49.300 17:16:58 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:49.300 17:16:58 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:49.300 17:16:58 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:49.300 17:16:58 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:49.300 17:16:58 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:49.300 17:16:58 -- common/autotest_common.sh@1194 -- # return 0 00:12:49.300 17:16:58 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:49.300 [global] 00:12:49.300 thread=1 00:12:49.300 invalidate=1 00:12:49.300 rw=write 00:12:49.300 time_based=1 00:12:49.300 runtime=1 00:12:49.300 ioengine=libaio 00:12:49.300 direct=1 00:12:49.300 bs=4096 00:12:49.300 iodepth=1 00:12:49.300 norandommap=0 00:12:49.300 numjobs=1 00:12:49.300 00:12:49.300 verify_dump=1 00:12:49.300 verify_backlog=512 00:12:49.300 verify_state_save=0 00:12:49.300 do_verify=1 00:12:49.300 verify=crc32c-intel 00:12:49.300 [job0] 00:12:49.300 filename=/dev/nvme0n1 00:12:49.300 Could not set queue depth (nvme0n1) 00:12:49.558 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:49.558 fio-3.35 00:12:49.558 Starting 1 thread 00:12:50.932 00:12:50.932 job0: (groupid=0, jobs=1): err= 0: pid=3010338: Wed Apr 24 17:16:59 2024 
00:12:50.932 read: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec) 00:12:50.932 slat (nsec): min=6260, max=26634, avg=6973.86, stdev=674.90 00:12:50.932 clat (usec): min=41, max=236, avg=60.88, stdev= 5.09 00:12:50.932 lat (usec): min=56, max=243, avg=67.86, stdev= 5.13 00:12:50.932 clat percentiles (usec): 00:12:50.932 | 1.00th=[ 52], 5.00th=[ 54], 10.00th=[ 56], 20.00th=[ 57], 00:12:50.932 | 30.00th=[ 59], 40.00th=[ 60], 50.00th=[ 61], 60.00th=[ 63], 00:12:50.932 | 70.00th=[ 64], 80.00th=[ 65], 90.00th=[ 68], 95.00th=[ 69], 00:12:50.932 | 99.00th=[ 73], 99.50th=[ 74], 99.90th=[ 77], 99.95th=[ 79], 00:12:50.932 | 99.99th=[ 237] 00:12:50.933 write: IOPS=7297, BW=28.5MiB/s (29.9MB/s)(28.5MiB/1001msec); 0 zone resets 00:12:50.933 slat (nsec): min=7930, max=40195, avg=8843.00, stdev=987.72 00:12:50.933 clat (usec): min=41, max=200, avg=57.87, stdev= 4.99 00:12:50.933 lat (usec): min=55, max=208, avg=66.71, stdev= 5.09 00:12:50.933 clat percentiles (usec): 00:12:50.933 | 1.00th=[ 49], 5.00th=[ 51], 10.00th=[ 52], 20.00th=[ 55], 00:12:50.933 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 60], 00:12:50.933 | 70.00th=[ 61], 80.00th=[ 63], 90.00th=[ 65], 95.00th=[ 67], 00:12:50.933 | 99.00th=[ 70], 99.50th=[ 71], 99.90th=[ 75], 99.95th=[ 78], 00:12:50.933 | 99.99th=[ 200] 00:12:50.933 bw ( KiB/s): min=29152, max=29152, per=99.87%, avg=29152.00, stdev= 0.00, samples=1 00:12:50.933 iops : min= 7288, max= 7288, avg=7288.00, stdev= 0.00, samples=1 00:12:50.933 lat (usec) : 50=2.13%, 100=97.86%, 250=0.01% 00:12:50.933 cpu : usr=8.20%, sys=14.70%, ctx=14473, majf=0, minf=2 00:12:50.933 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:50.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.933 issued rwts: total=7168,7305,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:50.933 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:50.933 00:12:50.933 Run status group 0 (all jobs): 00:12:50.933 READ: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:12:50.933 WRITE: bw=28.5MiB/s (29.9MB/s), 28.5MiB/s-28.5MiB/s (29.9MB/s-29.9MB/s), io=28.5MiB (29.9MB), run=1001-1001msec 00:12:50.933 00:12:50.933 Disk stats (read/write): 00:12:50.933 nvme0n1: ios=6397/6656, merge=0/0, ticks=341/342, in_queue=683, util=90.78% 00:12:50.933 17:16:59 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:52.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:52.834 17:17:01 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:52.834 17:17:01 -- common/autotest_common.sh@1205 -- # local i=0 00:12:52.834 17:17:01 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:52.834 17:17:01 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.834 17:17:01 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.834 17:17:01 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:52.834 17:17:01 -- common/autotest_common.sh@1217 -- # return 0 00:12:52.834 17:17:01 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:52.834 17:17:01 -- target/nmic.sh@53 -- # nvmftestfini 00:12:52.834 17:17:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:52.834 17:17:01 -- nvmf/common.sh@117 -- # sync 00:12:52.834 17:17:01 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:52.834 17:17:01 -- 
nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:52.834 17:17:01 -- nvmf/common.sh@120 -- # set +e 00:12:52.834 17:17:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:52.834 17:17:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:52.834 rmmod nvme_rdma 00:12:52.834 rmmod nvme_fabrics 00:12:52.834 17:17:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:52.834 17:17:01 -- nvmf/common.sh@124 -- # set -e 00:12:52.834 17:17:01 -- nvmf/common.sh@125 -- # return 0 00:12:52.834 17:17:01 -- nvmf/common.sh@478 -- # '[' -n 3010124 ']' 00:12:52.834 17:17:01 -- nvmf/common.sh@479 -- # killprocess 3010124 00:12:52.834 17:17:01 -- common/autotest_common.sh@936 -- # '[' -z 3010124 ']' 00:12:52.834 17:17:01 -- common/autotest_common.sh@940 -- # kill -0 3010124 00:12:52.834 17:17:01 -- common/autotest_common.sh@941 -- # uname 00:12:52.834 17:17:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:52.834 17:17:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3010124 00:12:52.834 17:17:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:52.834 17:17:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:52.834 17:17:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3010124' 00:12:52.834 killing process with pid 3010124 00:12:52.834 17:17:01 -- common/autotest_common.sh@955 -- # kill 3010124 00:12:52.834 17:17:01 -- common/autotest_common.sh@960 -- # wait 3010124 00:12:53.092 17:17:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:53.092 17:17:02 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:12:53.092 00:12:53.092 real 0m14.308s 00:12:53.092 user 0m42.093s 00:12:53.092 sys 0m4.763s 00:12:53.092 17:17:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:53.092 17:17:02 -- common/autotest_common.sh@10 -- # set +x 00:12:53.092 ************************************ 00:12:53.092 END TEST nvmf_nmic 00:12:53.092 ************************************ 00:12:53.092 17:17:02 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:12:53.092 17:17:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:53.092 17:17:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:53.092 17:17:02 -- common/autotest_common.sh@10 -- # set +x 00:12:53.351 ************************************ 00:12:53.351 START TEST nvmf_fio_target 00:12:53.351 ************************************ 00:12:53.351 17:17:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:12:53.351 * Looking for test storage... 
00:12:53.351 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:53.351 17:17:02 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:53.351 17:17:02 -- nvmf/common.sh@7 -- # uname -s 00:12:53.351 17:17:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:53.351 17:17:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:53.351 17:17:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:53.351 17:17:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:53.351 17:17:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:53.351 17:17:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:53.351 17:17:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:53.351 17:17:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:53.351 17:17:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:53.351 17:17:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:53.351 17:17:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:12:53.351 17:17:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:12:53.351 17:17:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:53.351 17:17:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:53.351 17:17:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:53.351 17:17:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:53.351 17:17:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:53.351 17:17:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:53.351 17:17:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:53.351 17:17:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:53.351 17:17:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.351 17:17:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.351 17:17:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.351 17:17:02 -- paths/export.sh@5 -- # export PATH 00:12:53.352 17:17:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.352 17:17:02 -- nvmf/common.sh@47 -- # : 0 00:12:53.352 17:17:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:53.352 17:17:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:53.352 17:17:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:53.352 17:17:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:53.352 17:17:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:53.352 17:17:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:53.352 17:17:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:53.352 17:17:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:53.352 17:17:02 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:53.352 17:17:02 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:53.352 17:17:02 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:53.352 17:17:02 -- target/fio.sh@16 -- # nvmftestinit 00:12:53.352 17:17:02 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:12:53.352 17:17:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:53.352 17:17:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:53.352 17:17:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:53.352 17:17:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:53.352 17:17:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.352 17:17:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:53.352 17:17:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.352 17:17:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:53.352 17:17:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:53.352 17:17:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:53.352 17:17:02 -- common/autotest_common.sh@10 -- # set +x 00:12:58.627 17:17:07 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:58.627 17:17:07 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:58.628 17:17:07 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:58.628 17:17:07 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:58.628 17:17:07 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:58.628 17:17:07 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:58.628 17:17:07 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:58.628 17:17:07 -- nvmf/common.sh@295 -- # net_devs=() 
00:12:58.628 17:17:07 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:58.628 17:17:07 -- nvmf/common.sh@296 -- # e810=() 00:12:58.628 17:17:07 -- nvmf/common.sh@296 -- # local -ga e810 00:12:58.628 17:17:07 -- nvmf/common.sh@297 -- # x722=() 00:12:58.628 17:17:07 -- nvmf/common.sh@297 -- # local -ga x722 00:12:58.628 17:17:07 -- nvmf/common.sh@298 -- # mlx=() 00:12:58.628 17:17:07 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:58.628 17:17:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:58.628 17:17:07 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:58.628 17:17:07 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:58.628 17:17:07 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:58.628 17:17:07 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:58.628 17:17:07 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:58.628 17:17:07 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:58.628 17:17:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:58.628 17:17:07 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:58.628 17:17:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:58.628 17:17:07 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:58.628 17:17:07 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:58.628 17:17:07 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:58.628 17:17:07 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:58.628 17:17:07 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:58.628 17:17:07 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:58.628 17:17:07 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:58.628 17:17:07 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:58.628 17:17:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:58.628 17:17:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:12:58.628 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:12:58.628 17:17:07 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:58.628 17:17:07 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:58.628 17:17:07 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:58.628 17:17:07 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:58.628 17:17:07 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:58.628 17:17:07 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:58.628 17:17:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:58.628 17:17:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:12:58.628 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:12:58.628 17:17:07 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:58.628 17:17:07 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:58.628 17:17:07 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:58.628 17:17:07 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:58.628 17:17:07 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:58.628 17:17:07 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:58.628 17:17:07 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:58.628 17:17:07 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:58.628 17:17:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:58.628 17:17:07 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:58.628 17:17:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:58.628 17:17:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:58.628 17:17:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:12:58.628 Found net devices under 0000:da:00.0: mlx_0_0 00:12:58.628 17:17:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:58.628 17:17:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:58.628 17:17:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:58.628 17:17:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:58.628 17:17:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:58.628 17:17:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:12:58.628 Found net devices under 0000:da:00.1: mlx_0_1 00:12:58.628 17:17:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:58.628 17:17:07 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:58.628 17:17:07 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:58.628 17:17:07 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:58.628 17:17:07 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:12:58.628 17:17:07 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:12:58.628 17:17:07 -- nvmf/common.sh@409 -- # rdma_device_init 00:12:58.628 17:17:07 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:12:58.628 17:17:07 -- nvmf/common.sh@58 -- # uname 00:12:58.628 17:17:07 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:58.628 17:17:07 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:58.628 17:17:07 -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:58.628 17:17:07 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:58.628 17:17:07 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:58.628 17:17:07 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:58.628 17:17:07 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:58.628 17:17:07 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:58.628 17:17:07 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:12:58.628 17:17:07 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:58.628 17:17:07 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:58.628 17:17:07 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:58.628 17:17:07 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:58.628 17:17:07 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:58.628 17:17:07 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:58.628 17:17:07 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:58.628 17:17:07 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:58.628 17:17:07 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:58.628 17:17:07 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:58.628 17:17:07 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:58.628 17:17:07 -- nvmf/common.sh@105 -- # continue 2 00:12:58.628 17:17:07 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:58.628 17:17:07 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:58.628 17:17:07 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:58.628 17:17:07 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:58.628 17:17:07 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:58.628 17:17:07 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:58.628 17:17:07 -- 
nvmf/common.sh@105 -- # continue 2 00:12:58.628 17:17:07 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:58.628 17:17:07 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:58.628 17:17:07 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:58.628 17:17:07 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:58.628 17:17:07 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:58.628 17:17:07 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:58.628 17:17:07 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:58.628 17:17:07 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:58.628 17:17:07 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:58.628 430: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:58.628 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:12:58.628 altname enp218s0f0np0 00:12:58.628 altname ens818f0np0 00:12:58.628 inet 192.168.100.8/24 scope global mlx_0_0 00:12:58.628 valid_lft forever preferred_lft forever 00:12:58.628 17:17:07 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:58.628 17:17:07 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:58.628 17:17:07 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:58.628 17:17:07 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:58.628 17:17:07 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:58.628 17:17:07 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:58.628 17:17:07 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:58.628 17:17:07 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:58.628 17:17:07 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:58.628 431: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:58.628 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:12:58.628 altname enp218s0f1np1 00:12:58.628 altname ens818f1np1 00:12:58.628 inet 192.168.100.9/24 scope global mlx_0_1 00:12:58.628 valid_lft forever preferred_lft forever 00:12:58.628 17:17:07 -- nvmf/common.sh@411 -- # return 0 00:12:58.628 17:17:07 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:58.628 17:17:07 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:58.628 17:17:07 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:12:58.628 17:17:07 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:12:58.628 17:17:07 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:58.628 17:17:07 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:58.628 17:17:07 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:58.628 17:17:07 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:58.628 17:17:07 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:58.889 17:17:07 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:58.889 17:17:07 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:58.889 17:17:07 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:58.889 17:17:07 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:58.889 17:17:07 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:58.889 17:17:07 -- nvmf/common.sh@105 -- # continue 2 00:12:58.889 17:17:07 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:58.889 17:17:07 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:58.889 17:17:07 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:58.889 17:17:07 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:58.889 17:17:07 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:12:58.889 17:17:07 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:58.889 17:17:07 -- nvmf/common.sh@105 -- # continue 2 00:12:58.889 17:17:07 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:58.889 17:17:07 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:58.889 17:17:07 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:58.889 17:17:07 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:58.889 17:17:07 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:58.889 17:17:07 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:58.889 17:17:07 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:58.889 17:17:07 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:58.889 17:17:07 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:58.889 17:17:07 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:58.889 17:17:07 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:58.889 17:17:07 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:58.889 17:17:07 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:12:58.889 192.168.100.9' 00:12:58.889 17:17:07 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:12:58.889 192.168.100.9' 00:12:58.889 17:17:07 -- nvmf/common.sh@446 -- # head -n 1 00:12:58.889 17:17:07 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:58.889 17:17:07 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:12:58.889 192.168.100.9' 00:12:58.889 17:17:07 -- nvmf/common.sh@447 -- # tail -n +2 00:12:58.889 17:17:07 -- nvmf/common.sh@447 -- # head -n 1 00:12:58.889 17:17:07 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:58.889 17:17:07 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:12:58.889 17:17:07 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:58.889 17:17:07 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:12:58.889 17:17:07 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:12:58.889 17:17:07 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:12:58.889 17:17:07 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:58.889 17:17:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:58.889 17:17:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:58.889 17:17:07 -- common/autotest_common.sh@10 -- # set +x 00:12:58.889 17:17:07 -- nvmf/common.sh@470 -- # nvmfpid=3012608 00:12:58.889 17:17:07 -- nvmf/common.sh@471 -- # waitforlisten 3012608 00:12:58.889 17:17:07 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:58.889 17:17:07 -- common/autotest_common.sh@817 -- # '[' -z 3012608 ']' 00:12:58.889 17:17:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.889 17:17:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:58.889 17:17:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.889 17:17:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:58.889 17:17:07 -- common/autotest_common.sh@10 -- # set +x 00:12:58.889 [2024-04-24 17:17:08.000428] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
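The trace above is where the harness discovers which RDMA-capable ports carry an IPv4 address and turns them into target addresses: each mlx_0_* interface is queried with ip -o -4 addr show, the address column is pulled out with awk, the prefix length is stripped with cut, and the resulting list is split into a first and second target IP with head/tail. Below is a minimal standalone sketch of that flow; the helper name mirrors what the trace shows, but it is an illustration rather than the actual nvmf/common.sh code, and it assumes the mlx_0_0/mlx_0_1 interface names seen in this run.

#!/usr/bin/env bash
set -euo pipefail

get_ip_address() {
    # Print the IPv4 address(es) bound to an interface, one per line, without the /prefix.
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

rdma_ips=""
for nic_name in mlx_0_0 mlx_0_1; do        # assumed interface list, taken from this log
    ip_addr=$(get_ip_address "$nic_name")
    if [[ -z "$ip_addr" ]]; then
        continue                           # skip ports without an IPv4 address
    fi
    rdma_ips+="${ip_addr}"$'\n'
done

# The first address becomes the primary target IP and the next one the secondary,
# matching the head/tail selection in the trace (192.168.100.8 / 192.168.100.9 here).
NVMF_FIRST_TARGET_IP=$(echo "$rdma_ips" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$rdma_ips" | tail -n +2 | head -n 1)
echo "first=$NVMF_FIRST_TARGET_IP second=$NVMF_SECOND_TARGET_IP"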
00:12:58.889 [2024-04-24 17:17:08.000471] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:58.889 EAL: No free 2048 kB hugepages reported on node 1 00:12:58.889 [2024-04-24 17:17:08.055369] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:58.889 [2024-04-24 17:17:08.125630] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:58.889 [2024-04-24 17:17:08.125670] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:58.889 [2024-04-24 17:17:08.125677] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:58.889 [2024-04-24 17:17:08.125682] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:58.889 [2024-04-24 17:17:08.125687] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:58.889 [2024-04-24 17:17:08.125750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.889 [2024-04-24 17:17:08.125850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:58.889 [2024-04-24 17:17:08.125902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:58.889 [2024-04-24 17:17:08.125904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.827 17:17:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:59.827 17:17:08 -- common/autotest_common.sh@850 -- # return 0 00:12:59.827 17:17:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:59.827 17:17:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:59.827 17:17:08 -- common/autotest_common.sh@10 -- # set +x 00:12:59.827 17:17:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.827 17:17:08 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:59.827 [2024-04-24 17:17:09.007254] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1289f60/0x128e450) succeed. 00:12:59.827 [2024-04-24 17:17:09.017704] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x128b550/0x12cfae0) succeed. 
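At this point the nvmf_tgt application is running and the first RPC has created the RDMA transport that the later subsystem listener rides on. A condensed, hedged sketch of just that bring-up step follows; the binary and rpc.py paths are the ones printed in the trace, while the readiness loop is an illustrative stand-in for the harness's own waitforlisten logic rather than a copy of it.

#!/usr/bin/env bash
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py"

# Launch the NVMe-oF target: shared-memory id 0, tracepoint mask 0xFFFF, cores 0-3.
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Wait until the RPC socket (/var/tmp/spdk.sock by default) answers.
for _ in $(seq 1 30); do
    "$RPC" rpc_get_methods > /dev/null 2>&1 && break
    sleep 1
done

# Create the RDMA transport with the same options the test passes; the malloc/raid
# bdevs, subsystem, namespaces and listener are added by the RPC calls that follow.
"$RPC" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192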
00:13:00.085 17:17:09 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:00.344 17:17:09 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:00.344 17:17:09 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:00.344 17:17:09 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:00.344 17:17:09 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:00.609 17:17:09 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:00.609 17:17:09 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:00.869 17:17:09 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:00.869 17:17:09 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:00.869 17:17:10 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:01.127 17:17:10 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:01.127 17:17:10 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:01.385 17:17:10 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:01.385 17:17:10 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:01.644 17:17:10 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:01.644 17:17:10 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:01.644 17:17:10 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:01.903 17:17:11 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:01.903 17:17:11 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:02.162 17:17:11 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:02.162 17:17:11 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:02.162 17:17:11 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:02.421 [2024-04-24 17:17:11.511326] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:02.421 17:17:11 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:02.680 17:17:11 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:02.680 17:17:11 -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:03.624 17:17:12 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:03.624 17:17:12 -- common/autotest_common.sh@1184 -- # local 
i=0 00:13:03.624 17:17:12 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:03.624 17:17:12 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:13:03.624 17:17:12 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:13:03.624 17:17:12 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:06.160 17:17:14 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:06.160 17:17:14 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:06.160 17:17:14 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:06.160 17:17:14 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:13:06.160 17:17:14 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:06.160 17:17:14 -- common/autotest_common.sh@1194 -- # return 0 00:13:06.160 17:17:14 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:06.160 [global] 00:13:06.160 thread=1 00:13:06.160 invalidate=1 00:13:06.160 rw=write 00:13:06.160 time_based=1 00:13:06.160 runtime=1 00:13:06.160 ioengine=libaio 00:13:06.160 direct=1 00:13:06.160 bs=4096 00:13:06.160 iodepth=1 00:13:06.160 norandommap=0 00:13:06.160 numjobs=1 00:13:06.160 00:13:06.160 verify_dump=1 00:13:06.160 verify_backlog=512 00:13:06.160 verify_state_save=0 00:13:06.160 do_verify=1 00:13:06.160 verify=crc32c-intel 00:13:06.160 [job0] 00:13:06.160 filename=/dev/nvme0n1 00:13:06.160 [job1] 00:13:06.160 filename=/dev/nvme0n2 00:13:06.160 [job2] 00:13:06.160 filename=/dev/nvme0n3 00:13:06.160 [job3] 00:13:06.160 filename=/dev/nvme0n4 00:13:06.160 Could not set queue depth (nvme0n1) 00:13:06.160 Could not set queue depth (nvme0n2) 00:13:06.160 Could not set queue depth (nvme0n3) 00:13:06.160 Could not set queue depth (nvme0n4) 00:13:06.160 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:06.160 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:06.160 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:06.160 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:06.160 fio-3.35 00:13:06.160 Starting 4 threads 00:13:07.537 00:13:07.537 job0: (groupid=0, jobs=1): err= 0: pid=3012896: Wed Apr 24 17:17:16 2024 00:13:07.537 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:13:07.537 slat (nsec): min=6064, max=22458, avg=6885.88, stdev=634.58 00:13:07.537 clat (usec): min=61, max=177, avg=87.44, stdev=14.17 00:13:07.537 lat (usec): min=68, max=183, avg=94.32, stdev=14.28 00:13:07.537 clat percentiles (usec): 00:13:07.537 | 1.00th=[ 72], 5.00th=[ 75], 10.00th=[ 77], 20.00th=[ 79], 00:13:07.537 | 30.00th=[ 81], 40.00th=[ 82], 50.00th=[ 84], 60.00th=[ 86], 00:13:07.537 | 70.00th=[ 88], 80.00th=[ 93], 90.00th=[ 105], 95.00th=[ 122], 00:13:07.537 | 99.00th=[ 135], 99.50th=[ 155], 99.90th=[ 167], 99.95th=[ 174], 00:13:07.537 | 99.99th=[ 178] 00:13:07.537 write: IOPS=5411, BW=21.1MiB/s (22.2MB/s)(21.2MiB/1001msec); 0 zone resets 00:13:07.537 slat (nsec): min=7872, max=41497, avg=8724.47, stdev=974.59 00:13:07.537 clat (usec): min=57, max=268, avg=83.12, stdev=14.59 00:13:07.537 lat (usec): min=66, max=277, avg=91.84, stdev=14.74 00:13:07.537 clat percentiles (usec): 00:13:07.537 | 1.00th=[ 68], 5.00th=[ 71], 10.00th=[ 73], 20.00th=[ 75], 00:13:07.537 | 30.00th=[ 77], 
40.00th=[ 78], 50.00th=[ 80], 60.00th=[ 82], 00:13:07.537 | 70.00th=[ 84], 80.00th=[ 88], 90.00th=[ 96], 95.00th=[ 120], 00:13:07.537 | 99.00th=[ 137], 99.50th=[ 157], 99.90th=[ 172], 99.95th=[ 178], 00:13:07.537 | 99.99th=[ 269] 00:13:07.537 bw ( KiB/s): min=23272, max=23272, per=26.80%, avg=23272.00, stdev= 0.00, samples=1 00:13:07.537 iops : min= 5818, max= 5818, avg=5818.00, stdev= 0.00, samples=1 00:13:07.537 lat (usec) : 100=89.99%, 250=10.00%, 500=0.01% 00:13:07.537 cpu : usr=6.70%, sys=10.10%, ctx=10537, majf=0, minf=1 00:13:07.537 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:07.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.537 issued rwts: total=5120,5417,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:07.537 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:07.538 job1: (groupid=0, jobs=1): err= 0: pid=3012902: Wed Apr 24 17:17:16 2024 00:13:07.538 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:13:07.538 slat (nsec): min=6252, max=27144, avg=6952.41, stdev=719.45 00:13:07.538 clat (usec): min=55, max=109, avg=78.44, stdev= 5.95 00:13:07.538 lat (usec): min=70, max=116, avg=85.39, stdev= 5.96 00:13:07.538 clat percentiles (usec): 00:13:07.538 | 1.00th=[ 68], 5.00th=[ 71], 10.00th=[ 72], 20.00th=[ 74], 00:13:07.538 | 30.00th=[ 76], 40.00th=[ 77], 50.00th=[ 78], 60.00th=[ 80], 00:13:07.538 | 70.00th=[ 81], 80.00th=[ 83], 90.00th=[ 87], 95.00th=[ 90], 00:13:07.538 | 99.00th=[ 96], 99.50th=[ 99], 99.90th=[ 104], 99.95th=[ 106], 00:13:07.538 | 99.99th=[ 111] 00:13:07.538 write: IOPS=5956, BW=23.3MiB/s (24.4MB/s)(23.3MiB/1001msec); 0 zone resets 00:13:07.538 slat (nsec): min=7955, max=45796, avg=8729.07, stdev=823.84 00:13:07.538 clat (usec): min=56, max=108, avg=74.82, stdev= 5.93 00:13:07.538 lat (usec): min=64, max=151, avg=83.55, stdev= 6.03 00:13:07.538 clat percentiles (usec): 00:13:07.538 | 1.00th=[ 64], 5.00th=[ 68], 10.00th=[ 69], 20.00th=[ 71], 00:13:07.538 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 75], 60.00th=[ 76], 00:13:07.538 | 70.00th=[ 78], 80.00th=[ 80], 90.00th=[ 83], 95.00th=[ 86], 00:13:07.538 | 99.00th=[ 93], 99.50th=[ 97], 99.90th=[ 104], 99.95th=[ 106], 00:13:07.538 | 99.99th=[ 109] 00:13:07.538 bw ( KiB/s): min=24576, max=24576, per=28.30%, avg=24576.00, stdev= 0.00, samples=1 00:13:07.538 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:13:07.538 lat (usec) : 100=99.72%, 250=0.28% 00:13:07.538 cpu : usr=6.50%, sys=11.90%, ctx=11594, majf=0, minf=1 00:13:07.538 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:07.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.538 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.538 issued rwts: total=5632,5962,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:07.538 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:07.538 job2: (groupid=0, jobs=1): err= 0: pid=3012904: Wed Apr 24 17:17:16 2024 00:13:07.538 read: IOPS=4757, BW=18.6MiB/s (19.5MB/s)(18.6MiB/1001msec) 00:13:07.538 slat (nsec): min=6603, max=15855, avg=7317.77, stdev=584.55 00:13:07.538 clat (usec): min=74, max=168, avg=93.45, stdev=12.41 00:13:07.538 lat (usec): min=82, max=175, avg=100.77, stdev=12.43 00:13:07.538 clat percentiles (usec): 00:13:07.538 | 1.00th=[ 79], 5.00th=[ 82], 10.00th=[ 83], 20.00th=[ 85], 00:13:07.538 | 30.00th=[ 87], 40.00th=[ 89], 50.00th=[ 
90], 60.00th=[ 92], 00:13:07.538 | 70.00th=[ 95], 80.00th=[ 99], 90.00th=[ 111], 95.00th=[ 123], 00:13:07.538 | 99.00th=[ 135], 99.50th=[ 149], 99.90th=[ 161], 99.95th=[ 165], 00:13:07.538 | 99.99th=[ 169] 00:13:07.538 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:13:07.538 slat (nsec): min=8224, max=34904, avg=9080.03, stdev=868.65 00:13:07.538 clat (usec): min=67, max=288, avg=88.92, stdev=13.45 00:13:07.538 lat (usec): min=76, max=303, avg=98.00, stdev=13.52 00:13:07.538 clat percentiles (usec): 00:13:07.538 | 1.00th=[ 74], 5.00th=[ 77], 10.00th=[ 79], 20.00th=[ 81], 00:13:07.538 | 30.00th=[ 82], 40.00th=[ 84], 50.00th=[ 85], 60.00th=[ 87], 00:13:07.538 | 70.00th=[ 90], 80.00th=[ 94], 90.00th=[ 108], 95.00th=[ 121], 00:13:07.538 | 99.00th=[ 133], 99.50th=[ 147], 99.90th=[ 167], 99.95th=[ 174], 00:13:07.538 | 99.99th=[ 289] 00:13:07.538 bw ( KiB/s): min=20480, max=20480, per=23.59%, avg=20480.00, stdev= 0.00, samples=1 00:13:07.538 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:13:07.538 lat (usec) : 100=84.47%, 250=15.52%, 500=0.01% 00:13:07.538 cpu : usr=5.10%, sys=11.20%, ctx=9882, majf=0, minf=1 00:13:07.538 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:07.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.538 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.538 issued rwts: total=4762,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:07.538 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:07.538 job3: (groupid=0, jobs=1): err= 0: pid=3012906: Wed Apr 24 17:17:16 2024 00:13:07.538 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:13:07.538 slat (nsec): min=6509, max=17435, avg=7174.80, stdev=623.37 00:13:07.538 clat (usec): min=72, max=129, avg=88.89, stdev= 7.05 00:13:07.538 lat (usec): min=79, max=136, avg=96.07, stdev= 7.09 00:13:07.538 clat percentiles (usec): 00:13:07.538 | 1.00th=[ 77], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 84], 00:13:07.538 | 30.00th=[ 85], 40.00th=[ 87], 50.00th=[ 88], 60.00th=[ 90], 00:13:07.538 | 70.00th=[ 92], 80.00th=[ 94], 90.00th=[ 99], 95.00th=[ 102], 00:13:07.538 | 99.00th=[ 112], 99.50th=[ 114], 99.90th=[ 120], 99.95th=[ 122], 00:13:07.538 | 99.99th=[ 130] 00:13:07.538 write: IOPS=5224, BW=20.4MiB/s (21.4MB/s)(20.4MiB/1001msec); 0 zone resets 00:13:07.538 slat (nsec): min=8193, max=35606, avg=9090.07, stdev=988.45 00:13:07.538 clat (usec): min=67, max=120, avg=84.40, stdev= 7.12 00:13:07.538 lat (usec): min=76, max=148, avg=93.49, stdev= 7.24 00:13:07.538 clat percentiles (usec): 00:13:07.538 | 1.00th=[ 73], 5.00th=[ 76], 10.00th=[ 77], 20.00th=[ 79], 00:13:07.538 | 30.00th=[ 81], 40.00th=[ 82], 50.00th=[ 84], 60.00th=[ 85], 00:13:07.538 | 70.00th=[ 87], 80.00th=[ 90], 90.00th=[ 94], 95.00th=[ 98], 00:13:07.538 | 99.00th=[ 108], 99.50th=[ 111], 99.90th=[ 117], 99.95th=[ 118], 00:13:07.538 | 99.99th=[ 121] 00:13:07.538 bw ( KiB/s): min=20752, max=20752, per=23.90%, avg=20752.00, stdev= 0.00, samples=1 00:13:07.538 iops : min= 5188, max= 5188, avg=5188.00, stdev= 0.00, samples=1 00:13:07.538 lat (usec) : 100=94.47%, 250=5.53% 00:13:07.538 cpu : usr=5.60%, sys=11.20%, ctx=10350, majf=0, minf=2 00:13:07.538 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:07.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.538 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.538 issued rwts: 
total=5120,5230,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:07.538 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:07.538 00:13:07.538 Run status group 0 (all jobs): 00:13:07.538 READ: bw=80.5MiB/s (84.4MB/s), 18.6MiB/s-22.0MiB/s (19.5MB/s-23.0MB/s), io=80.6MiB (84.5MB), run=1001-1001msec 00:13:07.538 WRITE: bw=84.8MiB/s (88.9MB/s), 20.0MiB/s-23.3MiB/s (20.9MB/s-24.4MB/s), io=84.9MiB (89.0MB), run=1001-1001msec 00:13:07.538 00:13:07.538 Disk stats (read/write): 00:13:07.538 nvme0n1: ios=4658/4819, merge=0/0, ticks=366/346, in_queue=712, util=86.97% 00:13:07.538 nvme0n2: ios=4830/5120, merge=0/0, ticks=353/343, in_queue=696, util=87.32% 00:13:07.538 nvme0n3: ios=4180/4608, merge=0/0, ticks=354/350, in_queue=704, util=89.22% 00:13:07.538 nvme0n4: ios=4252/4608, merge=0/0, ticks=356/359, in_queue=715, util=89.78% 00:13:07.538 17:17:16 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:07.538 [global] 00:13:07.538 thread=1 00:13:07.538 invalidate=1 00:13:07.538 rw=randwrite 00:13:07.538 time_based=1 00:13:07.538 runtime=1 00:13:07.538 ioengine=libaio 00:13:07.538 direct=1 00:13:07.538 bs=4096 00:13:07.538 iodepth=1 00:13:07.538 norandommap=0 00:13:07.538 numjobs=1 00:13:07.538 00:13:07.538 verify_dump=1 00:13:07.538 verify_backlog=512 00:13:07.538 verify_state_save=0 00:13:07.538 do_verify=1 00:13:07.538 verify=crc32c-intel 00:13:07.538 [job0] 00:13:07.538 filename=/dev/nvme0n1 00:13:07.538 [job1] 00:13:07.538 filename=/dev/nvme0n2 00:13:07.538 [job2] 00:13:07.538 filename=/dev/nvme0n3 00:13:07.538 [job3] 00:13:07.538 filename=/dev/nvme0n4 00:13:07.538 Could not set queue depth (nvme0n1) 00:13:07.538 Could not set queue depth (nvme0n2) 00:13:07.538 Could not set queue depth (nvme0n3) 00:13:07.538 Could not set queue depth (nvme0n4) 00:13:07.538 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:07.538 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:07.538 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:07.538 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:07.538 fio-3.35 00:13:07.538 Starting 4 threads 00:13:08.917 00:13:08.917 job0: (groupid=0, jobs=1): err= 0: pid=3013061: Wed Apr 24 17:17:17 2024 00:13:08.917 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:13:08.917 slat (nsec): min=5549, max=29712, avg=8533.12, stdev=2553.96 00:13:08.917 clat (usec): min=58, max=216, avg=92.19, stdev=21.27 00:13:08.917 lat (usec): min=72, max=234, avg=100.73, stdev=20.86 00:13:08.917 clat percentiles (usec): 00:13:08.917 | 1.00th=[ 69], 5.00th=[ 73], 10.00th=[ 75], 20.00th=[ 77], 00:13:08.917 | 30.00th=[ 80], 40.00th=[ 82], 50.00th=[ 85], 60.00th=[ 88], 00:13:08.917 | 70.00th=[ 93], 80.00th=[ 115], 90.00th=[ 124], 95.00th=[ 135], 00:13:08.917 | 99.00th=[ 161], 99.50th=[ 169], 99.90th=[ 190], 99.95th=[ 192], 00:13:08.917 | 99.99th=[ 217] 00:13:08.917 write: IOPS=5064, BW=19.8MiB/s (20.7MB/s)(19.8MiB/1001msec); 0 zone resets 00:13:08.917 slat (nsec): min=7830, max=33010, avg=10453.41, stdev=2697.62 00:13:08.917 clat (usec): min=59, max=185, avg=90.24, stdev=21.55 00:13:08.917 lat (usec): min=70, max=204, avg=100.69, stdev=21.35 00:13:08.917 clat percentiles (usec): 00:13:08.917 | 1.00th=[ 67], 5.00th=[ 70], 10.00th=[ 72], 20.00th=[ 75], 
00:13:08.917 | 30.00th=[ 77], 40.00th=[ 79], 50.00th=[ 82], 60.00th=[ 86], 00:13:08.917 | 70.00th=[ 94], 80.00th=[ 111], 90.00th=[ 120], 95.00th=[ 137], 00:13:08.917 | 99.00th=[ 161], 99.50th=[ 169], 99.90th=[ 178], 99.95th=[ 180], 00:13:08.917 | 99.99th=[ 186] 00:13:08.917 bw ( KiB/s): min=20480, max=20480, per=28.25%, avg=20480.00, stdev= 0.00, samples=1 00:13:08.917 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:13:08.917 lat (usec) : 100=74.37%, 250=25.63% 00:13:08.917 cpu : usr=5.60%, sys=11.10%, ctx=9678, majf=0, minf=1 00:13:08.917 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:08.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.917 issued rwts: total=4608,5070,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:08.917 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:08.917 job1: (groupid=0, jobs=1): err= 0: pid=3013062: Wed Apr 24 17:17:17 2024 00:13:08.917 read: IOPS=3603, BW=14.1MiB/s (14.8MB/s)(14.1MiB/1001msec) 00:13:08.917 slat (nsec): min=6299, max=25637, avg=7267.78, stdev=1006.82 00:13:08.917 clat (usec): min=66, max=196, avg=122.16, stdev=16.62 00:13:08.917 lat (usec): min=73, max=206, avg=129.42, stdev=16.77 00:13:08.917 clat percentiles (usec): 00:13:08.917 | 1.00th=[ 79], 5.00th=[ 96], 10.00th=[ 108], 20.00th=[ 114], 00:13:08.917 | 30.00th=[ 117], 40.00th=[ 120], 50.00th=[ 122], 60.00th=[ 124], 00:13:08.917 | 70.00th=[ 127], 80.00th=[ 130], 90.00th=[ 139], 95.00th=[ 155], 00:13:08.917 | 99.00th=[ 176], 99.50th=[ 182], 99.90th=[ 196], 99.95th=[ 196], 00:13:08.917 | 99.99th=[ 198] 00:13:08.917 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:13:08.917 slat (nsec): min=7791, max=37523, avg=9320.18, stdev=1685.65 00:13:08.917 clat (usec): min=59, max=198, avg=116.82, stdev=22.30 00:13:08.917 lat (usec): min=67, max=212, avg=126.14, stdev=23.14 00:13:08.917 clat percentiles (usec): 00:13:08.917 | 1.00th=[ 70], 5.00th=[ 79], 10.00th=[ 97], 20.00th=[ 104], 00:13:08.917 | 30.00th=[ 109], 40.00th=[ 112], 50.00th=[ 114], 60.00th=[ 117], 00:13:08.917 | 70.00th=[ 121], 80.00th=[ 130], 90.00th=[ 149], 95.00th=[ 163], 00:13:08.917 | 99.00th=[ 184], 99.50th=[ 190], 99.90th=[ 196], 99.95th=[ 198], 00:13:08.917 | 99.99th=[ 200] 00:13:08.917 bw ( KiB/s): min=16384, max=16384, per=22.60%, avg=16384.00, stdev= 0.00, samples=1 00:13:08.917 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:13:08.917 lat (usec) : 100=10.29%, 250=89.71% 00:13:08.917 cpu : usr=4.10%, sys=9.20%, ctx=7703, majf=0, minf=1 00:13:08.917 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:08.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.917 issued rwts: total=3607,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:08.917 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:08.917 job2: (groupid=0, jobs=1): err= 0: pid=3013063: Wed Apr 24 17:17:17 2024 00:13:08.917 read: IOPS=4082, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1001msec) 00:13:08.917 slat (nsec): min=6469, max=28582, avg=7556.43, stdev=1230.98 00:13:08.917 clat (usec): min=71, max=177, avg=115.53, stdev=18.04 00:13:08.917 lat (usec): min=78, max=184, avg=123.09, stdev=17.56 00:13:08.917 clat percentiles (usec): 00:13:08.917 | 1.00th=[ 76], 5.00th=[ 81], 10.00th=[ 85], 20.00th=[ 104], 
00:13:08.917 | 30.00th=[ 113], 40.00th=[ 117], 50.00th=[ 120], 60.00th=[ 123], 00:13:08.917 | 70.00th=[ 125], 80.00th=[ 128], 90.00th=[ 133], 95.00th=[ 141], 00:13:08.917 | 99.00th=[ 161], 99.50th=[ 165], 99.90th=[ 172], 99.95th=[ 174], 00:13:08.917 | 99.99th=[ 178] 00:13:08.917 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:13:08.917 slat (nsec): min=7776, max=36825, avg=9231.68, stdev=1423.80 00:13:08.917 clat (usec): min=66, max=165, avg=107.83, stdev=16.12 00:13:08.917 lat (usec): min=75, max=174, avg=117.06, stdev=15.67 00:13:08.917 clat percentiles (usec): 00:13:08.917 | 1.00th=[ 74], 5.00th=[ 78], 10.00th=[ 82], 20.00th=[ 97], 00:13:08.917 | 30.00th=[ 104], 40.00th=[ 108], 50.00th=[ 111], 60.00th=[ 113], 00:13:08.917 | 70.00th=[ 116], 80.00th=[ 119], 90.00th=[ 125], 95.00th=[ 135], 00:13:08.917 | 99.00th=[ 149], 99.50th=[ 153], 99.90th=[ 161], 99.95th=[ 161], 00:13:08.917 | 99.99th=[ 165] 00:13:08.917 bw ( KiB/s): min=16384, max=16384, per=22.60%, avg=16384.00, stdev= 0.00, samples=1 00:13:08.917 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:13:08.917 lat (usec) : 100=20.79%, 250=79.21% 00:13:08.917 cpu : usr=4.90%, sys=9.10%, ctx=8183, majf=0, minf=2 00:13:08.917 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:08.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.917 issued rwts: total=4087,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:08.917 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:08.917 job3: (groupid=0, jobs=1): err= 0: pid=3013064: Wed Apr 24 17:17:17 2024 00:13:08.917 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:13:08.917 slat (nsec): min=6137, max=27532, avg=7246.22, stdev=810.80 00:13:08.917 clat (usec): min=68, max=200, avg=98.89, stdev=20.14 00:13:08.917 lat (usec): min=78, max=207, avg=106.13, stdev=20.30 00:13:08.917 clat percentiles (usec): 00:13:08.917 | 1.00th=[ 77], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 84], 00:13:08.917 | 30.00th=[ 86], 40.00th=[ 89], 50.00th=[ 91], 60.00th=[ 94], 00:13:08.917 | 70.00th=[ 101], 80.00th=[ 119], 90.00th=[ 128], 95.00th=[ 141], 00:13:08.917 | 99.00th=[ 163], 99.50th=[ 167], 99.90th=[ 182], 99.95th=[ 182], 00:13:08.917 | 99.99th=[ 200] 00:13:08.918 write: IOPS=4872, BW=19.0MiB/s (20.0MB/s)(19.1MiB/1001msec); 0 zone resets 00:13:08.918 slat (nsec): min=7761, max=34395, avg=8816.82, stdev=958.93 00:13:08.918 clat (usec): min=67, max=306, avg=92.20, stdev=17.50 00:13:08.918 lat (usec): min=77, max=314, avg=101.01, stdev=17.70 00:13:08.918 clat percentiles (usec): 00:13:08.918 | 1.00th=[ 72], 5.00th=[ 75], 10.00th=[ 77], 20.00th=[ 80], 00:13:08.918 | 30.00th=[ 82], 40.00th=[ 84], 50.00th=[ 86], 60.00th=[ 89], 00:13:08.918 | 70.00th=[ 95], 80.00th=[ 109], 90.00th=[ 117], 95.00th=[ 126], 00:13:08.918 | 99.00th=[ 149], 99.50th=[ 157], 99.90th=[ 167], 99.95th=[ 174], 00:13:08.918 | 99.99th=[ 306] 00:13:08.918 bw ( KiB/s): min=20480, max=20480, per=28.25%, avg=20480.00, stdev= 0.00, samples=1 00:13:08.918 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:13:08.918 lat (usec) : 100=71.62%, 250=28.37%, 500=0.01% 00:13:08.918 cpu : usr=5.40%, sys=10.10%, ctx=9485, majf=0, minf=1 00:13:08.918 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:08.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.918 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.918 issued rwts: total=4608,4877,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:08.918 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:08.918 00:13:08.918 Run status group 0 (all jobs): 00:13:08.918 READ: bw=66.0MiB/s (69.2MB/s), 14.1MiB/s-18.0MiB/s (14.8MB/s-18.9MB/s), io=66.1MiB (69.3MB), run=1001-1001msec 00:13:08.918 WRITE: bw=70.8MiB/s (74.2MB/s), 16.0MiB/s-19.8MiB/s (16.8MB/s-20.7MB/s), io=70.9MiB (74.3MB), run=1001-1001msec 00:13:08.918 00:13:08.918 Disk stats (read/write): 00:13:08.918 nvme0n1: ios=4146/4596, merge=0/0, ticks=342/347, in_queue=689, util=87.27% 00:13:08.918 nvme0n2: ios=3226/3584, merge=0/0, ticks=372/384, in_queue=756, util=87.44% 00:13:08.918 nvme0n3: ios=3226/3584, merge=0/0, ticks=369/379, in_queue=748, util=89.25% 00:13:08.918 nvme0n4: ios=4096/4392, merge=0/0, ticks=353/372, in_queue=725, util=89.80% 00:13:08.918 17:17:17 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:08.918 [global] 00:13:08.918 thread=1 00:13:08.918 invalidate=1 00:13:08.918 rw=write 00:13:08.918 time_based=1 00:13:08.918 runtime=1 00:13:08.918 ioengine=libaio 00:13:08.918 direct=1 00:13:08.918 bs=4096 00:13:08.918 iodepth=128 00:13:08.918 norandommap=0 00:13:08.918 numjobs=1 00:13:08.918 00:13:08.918 verify_dump=1 00:13:08.918 verify_backlog=512 00:13:08.918 verify_state_save=0 00:13:08.918 do_verify=1 00:13:08.918 verify=crc32c-intel 00:13:08.918 [job0] 00:13:08.918 filename=/dev/nvme0n1 00:13:08.918 [job1] 00:13:08.918 filename=/dev/nvme0n2 00:13:08.918 [job2] 00:13:08.918 filename=/dev/nvme0n3 00:13:08.918 [job3] 00:13:08.918 filename=/dev/nvme0n4 00:13:08.918 Could not set queue depth (nvme0n1) 00:13:08.918 Could not set queue depth (nvme0n2) 00:13:08.918 Could not set queue depth (nvme0n3) 00:13:08.918 Could not set queue depth (nvme0n4) 00:13:09.177 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:09.177 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:09.177 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:09.177 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:09.177 fio-3.35 00:13:09.177 Starting 4 threads 00:13:10.578 00:13:10.578 job0: (groupid=0, jobs=1): err= 0: pid=3013222: Wed Apr 24 17:17:19 2024 00:13:10.578 read: IOPS=6634, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec) 00:13:10.578 slat (nsec): min=1358, max=1453.6k, avg=76007.92, stdev=206458.37 00:13:10.578 clat (usec): min=654, max=11628, avg=9775.17, stdev=1939.62 00:13:10.578 lat (usec): min=1130, max=11921, avg=9851.18, stdev=1945.45 00:13:10.578 clat percentiles (usec): 00:13:10.578 | 1.00th=[ 4228], 5.00th=[ 6259], 10.00th=[ 6456], 20.00th=[ 7111], 00:13:10.578 | 30.00th=[10290], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:13:10.578 | 70.00th=[10945], 80.00th=[11076], 90.00th=[11076], 95.00th=[11207], 00:13:10.578 | 99.00th=[11207], 99.50th=[11207], 99.90th=[11338], 99.95th=[11600], 00:13:10.578 | 99.99th=[11600] 00:13:10.578 write: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec); 0 zone resets 00:13:10.578 slat (usec): min=2, max=1056, avg=71.45, stdev=192.73 00:13:10.578 clat (usec): min=4790, max=10955, avg=9271.70, stdev=1625.55 00:13:10.578 lat (usec): min=4803, max=11343, avg=9343.14, stdev=1629.09 
00:13:10.578 clat percentiles (usec): 00:13:10.578 | 1.00th=[ 5800], 5.00th=[ 6063], 10.00th=[ 6259], 20.00th=[ 6915], 00:13:10.578 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10028], 60.00th=[10159], 00:13:10.578 | 70.00th=[10159], 80.00th=[10290], 90.00th=[10421], 95.00th=[10552], 00:13:10.578 | 99.00th=[10814], 99.50th=[10945], 99.90th=[10945], 99.95th=[10945], 00:13:10.578 | 99.99th=[10945] 00:13:10.578 bw ( KiB/s): min=24576, max=28672, per=28.68%, avg=26624.00, stdev=2896.31, samples=2 00:13:10.578 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:13:10.578 lat (usec) : 750=0.01% 00:13:10.578 lat (msec) : 2=0.19%, 4=0.29%, 10=33.15%, 20=66.36% 00:13:10.578 cpu : usr=2.80%, sys=4.90%, ctx=1841, majf=0, minf=1 00:13:10.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:13:10.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:10.578 issued rwts: total=6648,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.578 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:10.578 job1: (groupid=0, jobs=1): err= 0: pid=3013223: Wed Apr 24 17:17:19 2024 00:13:10.578 read: IOPS=6587, BW=25.7MiB/s (27.0MB/s)(25.8MiB/1001msec) 00:13:10.578 slat (nsec): min=1384, max=1449.6k, avg=76394.03, stdev=253451.40 00:13:10.578 clat (usec): min=601, max=11629, avg=9823.63, stdev=1920.78 00:13:10.578 lat (usec): min=607, max=11632, avg=9900.03, stdev=1921.17 00:13:10.578 clat percentiles (usec): 00:13:10.578 | 1.00th=[ 3720], 5.00th=[ 6325], 10.00th=[ 6521], 20.00th=[ 7046], 00:13:10.578 | 30.00th=[10290], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:13:10.578 | 70.00th=[10945], 80.00th=[11076], 90.00th=[11076], 95.00th=[11207], 00:13:10.578 | 99.00th=[11207], 99.50th=[11338], 99.90th=[11338], 99.95th=[11338], 00:13:10.578 | 99.99th=[11600] 00:13:10.578 write: IOPS=6649, BW=26.0MiB/s (27.2MB/s)(26.0MiB/1001msec); 0 zone resets 00:13:10.578 slat (nsec): min=1948, max=1792.8k, avg=71798.18, stdev=238906.45 00:13:10.578 clat (usec): min=5676, max=11455, avg=9291.84, stdev=1614.14 00:13:10.578 lat (usec): min=5679, max=11458, avg=9363.64, stdev=1611.58 00:13:10.578 clat percentiles (usec): 00:13:10.578 | 1.00th=[ 5866], 5.00th=[ 6128], 10.00th=[ 6325], 20.00th=[ 6849], 00:13:10.578 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10159], 00:13:10.578 | 70.00th=[10290], 80.00th=[10290], 90.00th=[10421], 95.00th=[10683], 00:13:10.578 | 99.00th=[10814], 99.50th=[10945], 99.90th=[10945], 99.95th=[11076], 00:13:10.578 | 99.99th=[11469] 00:13:10.578 bw ( KiB/s): min=24576, max=24576, per=26.48%, avg=24576.00, stdev= 0.00, samples=1 00:13:10.578 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:13:10.578 lat (usec) : 750=0.03% 00:13:10.578 lat (msec) : 2=0.13%, 4=0.35%, 10=32.29%, 20=67.19% 00:13:10.578 cpu : usr=2.50%, sys=5.00%, ctx=2121, majf=0, minf=1 00:13:10.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:13:10.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:10.578 issued rwts: total=6594,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.578 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:10.578 job2: (groupid=0, jobs=1): err= 0: pid=3013224: Wed Apr 24 17:17:19 2024 00:13:10.578 read: IOPS=5423, BW=21.2MiB/s (22.2MB/s)(21.2MiB/1003msec) 
00:13:10.578 slat (nsec): min=1425, max=3113.1k, avg=90675.85, stdev=327252.06 00:13:10.578 clat (usec): min=1546, max=16260, avg=11600.70, stdev=1129.62 00:13:10.578 lat (usec): min=3454, max=16883, avg=11691.38, stdev=1151.55 00:13:10.578 clat percentiles (usec): 00:13:10.578 | 1.00th=[ 9241], 5.00th=[10552], 10.00th=[10814], 20.00th=[11076], 00:13:10.578 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11469], 60.00th=[11469], 00:13:10.578 | 70.00th=[11600], 80.00th=[12125], 90.00th=[13173], 95.00th=[13698], 00:13:10.578 | 99.00th=[14484], 99.50th=[14877], 99.90th=[15664], 99.95th=[15795], 00:13:10.578 | 99.99th=[16319] 00:13:10.578 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:13:10.578 slat (usec): min=2, max=2946, avg=87.14, stdev=299.52 00:13:10.578 clat (usec): min=8420, max=16051, avg=11327.38, stdev=950.44 00:13:10.578 lat (usec): min=8431, max=16059, avg=11414.52, stdev=975.64 00:13:10.578 clat percentiles (usec): 00:13:10.578 | 1.00th=[ 9896], 5.00th=[10290], 10.00th=[10552], 20.00th=[10683], 00:13:10.578 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:13:10.578 | 70.00th=[11338], 80.00th=[11600], 90.00th=[13042], 95.00th=[13566], 00:13:10.578 | 99.00th=[14222], 99.50th=[14353], 99.90th=[15664], 99.95th=[15795], 00:13:10.578 | 99.99th=[16057] 00:13:10.578 bw ( KiB/s): min=20960, max=24096, per=24.27%, avg=22528.00, stdev=2217.49, samples=2 00:13:10.578 iops : min= 5240, max= 6024, avg=5632.00, stdev=554.37, samples=2 00:13:10.578 lat (msec) : 2=0.01%, 4=0.05%, 10=1.55%, 20=98.38% 00:13:10.578 cpu : usr=2.79%, sys=3.99%, ctx=965, majf=0, minf=1 00:13:10.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:13:10.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:10.578 issued rwts: total=5440,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.578 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:10.578 job3: (groupid=0, jobs=1): err= 0: pid=3013225: Wed Apr 24 17:17:19 2024 00:13:10.578 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:13:10.578 slat (nsec): min=1466, max=3462.4k, avg=118899.95, stdev=429155.62 00:13:10.578 clat (usec): min=11771, max=19113, avg=15283.39, stdev=1133.80 00:13:10.578 lat (usec): min=13092, max=19122, avg=15402.29, stdev=1126.01 00:13:10.578 clat percentiles (usec): 00:13:10.578 | 1.00th=[13173], 5.00th=[14091], 10.00th=[14353], 20.00th=[14484], 00:13:10.578 | 30.00th=[14746], 40.00th=[14877], 50.00th=[15008], 60.00th=[15139], 00:13:10.578 | 70.00th=[15401], 80.00th=[15664], 90.00th=[17433], 95.00th=[17957], 00:13:10.578 | 99.00th=[18482], 99.50th=[18744], 99.90th=[19006], 99.95th=[19006], 00:13:10.578 | 99.99th=[19006] 00:13:10.578 write: IOPS=4319, BW=16.9MiB/s (17.7MB/s)(16.9MiB/1003msec); 0 zone resets 00:13:10.578 slat (usec): min=2, max=3166, avg=115.17, stdev=401.30 00:13:10.578 clat (usec): min=2471, max=19691, avg=14802.01, stdev=1651.34 00:13:10.578 lat (usec): min=2476, max=19694, avg=14917.18, stdev=1649.38 00:13:10.578 clat percentiles (usec): 00:13:10.578 | 1.00th=[ 7373], 5.00th=[13173], 10.00th=[13698], 20.00th=[14091], 00:13:10.578 | 30.00th=[14353], 40.00th=[14484], 50.00th=[14615], 60.00th=[14746], 00:13:10.578 | 70.00th=[15008], 80.00th=[15270], 90.00th=[17171], 95.00th=[17957], 00:13:10.578 | 99.00th=[18744], 99.50th=[19268], 99.90th=[19792], 99.95th=[19792], 00:13:10.578 | 99.99th=[19792] 00:13:10.578 bw ( 
KiB/s): min=16312, max=17328, per=18.12%, avg=16820.00, stdev=718.42, samples=2 00:13:10.578 iops : min= 4078, max= 4332, avg=4205.00, stdev=179.61, samples=2 00:13:10.578 lat (msec) : 4=0.20%, 10=0.38%, 20=99.42% 00:13:10.578 cpu : usr=2.10%, sys=3.49%, ctx=838, majf=0, minf=1 00:13:10.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:10.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:10.578 issued rwts: total=4096,4332,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.578 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:10.578 00:13:10.578 Run status group 0 (all jobs): 00:13:10.578 READ: bw=88.7MiB/s (93.0MB/s), 16.0MiB/s-25.9MiB/s (16.7MB/s-27.2MB/s), io=89.0MiB (93.3MB), run=1001-1003msec 00:13:10.578 WRITE: bw=90.6MiB/s (95.1MB/s), 16.9MiB/s-26.0MiB/s (17.7MB/s-27.2MB/s), io=90.9MiB (95.3MB), run=1001-1003msec 00:13:10.578 00:13:10.578 Disk stats (read/write): 00:13:10.578 nvme0n1: ios=5170/5452, merge=0/0, ticks=13911/13449, in_queue=27360, util=87.07% 00:13:10.578 nvme0n2: ios=5120/5390, merge=0/0, ticks=13961/13359, in_queue=27320, util=87.34% 00:13:10.578 nvme0n3: ios=4608/5110, merge=0/0, ticks=17087/18405, in_queue=35492, util=89.23% 00:13:10.578 nvme0n4: ios=3584/3813, merge=0/0, ticks=13323/13631, in_queue=26954, util=89.79% 00:13:10.578 17:17:19 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:10.578 [global] 00:13:10.578 thread=1 00:13:10.578 invalidate=1 00:13:10.578 rw=randwrite 00:13:10.578 time_based=1 00:13:10.578 runtime=1 00:13:10.578 ioengine=libaio 00:13:10.578 direct=1 00:13:10.578 bs=4096 00:13:10.578 iodepth=128 00:13:10.578 norandommap=0 00:13:10.578 numjobs=1 00:13:10.578 00:13:10.578 verify_dump=1 00:13:10.578 verify_backlog=512 00:13:10.578 verify_state_save=0 00:13:10.578 do_verify=1 00:13:10.578 verify=crc32c-intel 00:13:10.578 [job0] 00:13:10.578 filename=/dev/nvme0n1 00:13:10.578 [job1] 00:13:10.578 filename=/dev/nvme0n2 00:13:10.578 [job2] 00:13:10.579 filename=/dev/nvme0n3 00:13:10.579 [job3] 00:13:10.579 filename=/dev/nvme0n4 00:13:10.579 Could not set queue depth (nvme0n1) 00:13:10.579 Could not set queue depth (nvme0n2) 00:13:10.579 Could not set queue depth (nvme0n3) 00:13:10.579 Could not set queue depth (nvme0n4) 00:13:10.841 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:10.841 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:10.841 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:10.841 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:10.841 fio-3.35 00:13:10.841 Starting 4 threads 00:13:12.213 00:13:12.213 job0: (groupid=0, jobs=1): err= 0: pid=3013382: Wed Apr 24 17:17:21 2024 00:13:12.213 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:13:12.213 slat (nsec): min=1180, max=1918.0k, avg=86519.63, stdev=243516.15 00:13:12.213 clat (usec): min=9004, max=19365, avg=11179.80, stdev=2007.73 00:13:12.213 lat (usec): min=9326, max=19372, avg=11266.32, stdev=2008.40 00:13:12.213 clat percentiles (usec): 00:13:12.213 | 1.00th=[ 9634], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10421], 00:13:12.213 | 30.00th=[10552], 
40.00th=[10552], 50.00th=[10683], 60.00th=[10683], 00:13:12.213 | 70.00th=[10814], 80.00th=[10945], 90.00th=[11207], 95.00th=[17957], 00:13:12.213 | 99.00th=[19006], 99.50th=[19268], 99.90th=[19268], 99.95th=[19268], 00:13:12.213 | 99.99th=[19268] 00:13:12.213 write: IOPS=5966, BW=23.3MiB/s (24.4MB/s)(23.4MiB/1003msec); 0 zone resets 00:13:12.213 slat (nsec): min=1804, max=3677.2k, avg=82775.98, stdev=230005.54 00:13:12.213 clat (usec): min=2046, max=18027, avg=10671.09, stdev=1505.65 00:13:12.213 lat (usec): min=2057, max=18030, avg=10753.86, stdev=1498.98 00:13:12.213 clat percentiles (usec): 00:13:12.213 | 1.00th=[ 7308], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10159], 00:13:12.213 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10552], 60.00th=[10552], 00:13:12.213 | 70.00th=[10683], 80.00th=[10814], 90.00th=[11207], 95.00th=[12387], 00:13:12.213 | 99.00th=[17695], 99.50th=[17957], 99.90th=[17957], 99.95th=[17957], 00:13:12.213 | 99.99th=[17957] 00:13:12.213 bw ( KiB/s): min=23112, max=23744, per=25.68%, avg=23428.00, stdev=446.89, samples=2 00:13:12.213 iops : min= 5778, max= 5936, avg=5857.00, stdev=111.72, samples=2 00:13:12.213 lat (msec) : 4=0.22%, 10=9.52%, 20=90.26% 00:13:12.213 cpu : usr=2.00%, sys=3.29%, ctx=1891, majf=0, minf=1 00:13:12.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:13:12.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:12.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:12.213 issued rwts: total=5632,5984,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:12.213 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:12.213 job1: (groupid=0, jobs=1): err= 0: pid=3013383: Wed Apr 24 17:17:21 2024 00:13:12.213 read: IOPS=6287, BW=24.6MiB/s (25.8MB/s)(24.6MiB/1003msec) 00:13:12.213 slat (nsec): min=1194, max=2078.4k, avg=77073.26, stdev=241966.22 00:13:12.213 clat (usec): min=1564, max=14381, avg=9894.37, stdev=1952.12 00:13:12.213 lat (usec): min=2282, max=14389, avg=9971.44, stdev=1951.90 00:13:12.213 clat percentiles (usec): 00:13:12.213 | 1.00th=[ 5014], 5.00th=[ 5276], 10.00th=[ 5669], 20.00th=[ 9896], 00:13:12.213 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[10683], 00:13:12.213 | 70.00th=[10814], 80.00th=[10945], 90.00th=[11076], 95.00th=[11207], 00:13:12.213 | 99.00th=[12780], 99.50th=[13829], 99.90th=[14353], 99.95th=[14353], 00:13:12.213 | 99.99th=[14353] 00:13:12.213 write: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:13:12.213 slat (nsec): min=1781, max=1557.8k, avg=74630.97, stdev=226369.17 00:13:12.213 clat (usec): min=4735, max=14166, avg=9678.13, stdev=2000.56 00:13:12.213 lat (usec): min=4740, max=14173, avg=9752.76, stdev=2004.00 00:13:12.213 clat percentiles (usec): 00:13:12.213 | 1.00th=[ 4817], 5.00th=[ 4948], 10.00th=[ 5342], 20.00th=[ 9765], 00:13:12.213 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10421], 60.00th=[10552], 00:13:12.213 | 70.00th=[10552], 80.00th=[10683], 90.00th=[10683], 95.00th=[11863], 00:13:12.213 | 99.00th=[12911], 99.50th=[13304], 99.90th=[14091], 99.95th=[14091], 00:13:12.213 | 99.99th=[14222] 00:13:12.213 bw ( KiB/s): min=24576, max=28672, per=29.18%, avg=26624.00, stdev=2896.31, samples=2 00:13:12.213 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:13:12.213 lat (msec) : 2=0.01%, 4=0.19%, 10=23.18%, 20=76.62% 00:13:12.213 cpu : usr=2.10%, sys=4.19%, ctx=1788, majf=0, minf=1 00:13:12.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, 
>=64=99.5% 00:13:12.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:12.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:12.213 issued rwts: total=6306,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:12.214 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:12.214 job2: (groupid=0, jobs=1): err= 0: pid=3013384: Wed Apr 24 17:17:21 2024 00:13:12.214 read: IOPS=4648, BW=18.2MiB/s (19.0MB/s)(18.2MiB/1003msec) 00:13:12.214 slat (nsec): min=1387, max=2568.4k, avg=104886.87, stdev=345250.30 00:13:12.214 clat (usec): min=1648, max=19415, avg=13438.81, stdev=1811.67 00:13:12.214 lat (usec): min=3261, max=19418, avg=13543.70, stdev=1787.42 00:13:12.214 clat percentiles (usec): 00:13:12.214 | 1.00th=[ 7242], 5.00th=[12256], 10.00th=[12518], 20.00th=[12911], 00:13:12.214 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13173], 60.00th=[13173], 00:13:12.214 | 70.00th=[13304], 80.00th=[13304], 90.00th=[13566], 95.00th=[18482], 00:13:12.214 | 99.00th=[19006], 99.50th=[19268], 99.90th=[19530], 99.95th=[19530], 00:13:12.214 | 99.99th=[19530] 00:13:12.214 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:13:12.214 slat (nsec): min=1949, max=2277.0k, avg=97310.51, stdev=316366.04 00:13:12.214 clat (usec): min=7977, max=17887, avg=12522.45, stdev=1061.59 00:13:12.214 lat (usec): min=7984, max=17891, avg=12619.76, stdev=1023.69 00:13:12.214 clat percentiles (usec): 00:13:12.214 | 1.00th=[10683], 5.00th=[11600], 10.00th=[11994], 20.00th=[12256], 00:13:12.214 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12387], 60.00th=[12387], 00:13:12.214 | 70.00th=[12518], 80.00th=[12649], 90.00th=[12911], 95.00th=[13435], 00:13:12.214 | 99.00th=[17695], 99.50th=[17695], 99.90th=[17957], 99.95th=[17957], 00:13:12.214 | 99.99th=[17957] 00:13:12.214 bw ( KiB/s): min=19896, max=20480, per=22.12%, avg=20188.00, stdev=412.95, samples=2 00:13:12.214 iops : min= 4974, max= 5120, avg=5047.00, stdev=103.24, samples=2 00:13:12.214 lat (msec) : 2=0.01%, 4=0.09%, 10=0.66%, 20=99.23% 00:13:12.214 cpu : usr=1.90%, sys=3.69%, ctx=2455, majf=0, minf=1 00:13:12.214 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:12.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:12.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:12.214 issued rwts: total=4662,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:12.214 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:12.214 job3: (groupid=0, jobs=1): err= 0: pid=3013385: Wed Apr 24 17:17:21 2024 00:13:12.214 read: IOPS=4650, BW=18.2MiB/s (19.0MB/s)(18.2MiB/1003msec) 00:13:12.214 slat (nsec): min=1284, max=3645.3k, avg=104733.63, stdev=294116.42 00:13:12.214 clat (usec): min=2352, max=19609, avg=13462.10, stdev=1842.93 00:13:12.214 lat (usec): min=2357, max=20209, avg=13566.83, stdev=1828.24 00:13:12.214 clat percentiles (usec): 00:13:12.214 | 1.00th=[ 6521], 5.00th=[12387], 10.00th=[12518], 20.00th=[12911], 00:13:12.214 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13173], 60.00th=[13173], 00:13:12.214 | 70.00th=[13304], 80.00th=[13304], 90.00th=[13960], 95.00th=[18744], 00:13:12.214 | 99.00th=[19268], 99.50th=[19268], 99.90th=[19268], 99.95th=[19268], 00:13:12.214 | 99.99th=[19530] 00:13:12.214 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:13:12.214 slat (nsec): min=1923, max=2992.1k, avg=97194.68, stdev=270129.04 00:13:12.214 clat (usec): min=8625, max=17878, 
avg=12491.01, stdev=1038.96 00:13:12.214 lat (usec): min=8633, max=17884, avg=12588.21, stdev=1016.20 00:13:12.214 clat percentiles (usec): 00:13:12.214 | 1.00th=[10683], 5.00th=[11600], 10.00th=[11731], 20.00th=[12125], 00:13:12.214 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12387], 60.00th=[12387], 00:13:12.214 | 70.00th=[12518], 80.00th=[12518], 90.00th=[12911], 95.00th=[13173], 00:13:12.214 | 99.00th=[17695], 99.50th=[17695], 99.90th=[17957], 99.95th=[17957], 00:13:12.214 | 99.99th=[17957] 00:13:12.214 bw ( KiB/s): min=19912, max=20480, per=22.13%, avg=20196.00, stdev=401.64, samples=2 00:13:12.214 iops : min= 4978, max= 5120, avg=5049.00, stdev=100.41, samples=2 00:13:12.214 lat (msec) : 4=0.15%, 10=0.67%, 20=99.17% 00:13:12.214 cpu : usr=2.00%, sys=3.59%, ctx=1787, majf=0, minf=1 00:13:12.214 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:12.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:12.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:12.214 issued rwts: total=4664,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:12.214 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:12.214 00:13:12.214 Run status group 0 (all jobs): 00:13:12.214 READ: bw=82.8MiB/s (86.8MB/s), 18.2MiB/s-24.6MiB/s (19.0MB/s-25.8MB/s), io=83.1MiB (87.1MB), run=1003-1003msec 00:13:12.214 WRITE: bw=89.1MiB/s (93.4MB/s), 19.9MiB/s-25.9MiB/s (20.9MB/s-27.2MB/s), io=89.4MiB (93.7MB), run=1003-1003msec 00:13:12.214 00:13:12.214 Disk stats (read/write): 00:13:12.214 nvme0n1: ios=4793/5120, merge=0/0, ticks=13327/13798, in_queue=27125, util=86.87% 00:13:12.214 nvme0n2: ios=5599/5632, merge=0/0, ticks=13796/13289, in_queue=27085, util=87.21% 00:13:12.214 nvme0n3: ios=4096/4224, merge=0/0, ticks=13926/13284, in_queue=27210, util=89.11% 00:13:12.214 nvme0n4: ios=4096/4229, merge=0/0, ticks=14002/13285, in_queue=27287, util=89.46% 00:13:12.214 17:17:21 -- target/fio.sh@55 -- # sync 00:13:12.214 17:17:21 -- target/fio.sh@59 -- # fio_pid=3013402 00:13:12.214 17:17:21 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:12.214 17:17:21 -- target/fio.sh@61 -- # sleep 3 00:13:12.214 [global] 00:13:12.214 thread=1 00:13:12.214 invalidate=1 00:13:12.214 rw=read 00:13:12.214 time_based=1 00:13:12.214 runtime=10 00:13:12.214 ioengine=libaio 00:13:12.214 direct=1 00:13:12.214 bs=4096 00:13:12.214 iodepth=1 00:13:12.214 norandommap=1 00:13:12.214 numjobs=1 00:13:12.214 00:13:12.214 [job0] 00:13:12.214 filename=/dev/nvme0n1 00:13:12.214 [job1] 00:13:12.214 filename=/dev/nvme0n2 00:13:12.214 [job2] 00:13:12.214 filename=/dev/nvme0n3 00:13:12.214 [job3] 00:13:12.214 filename=/dev/nvme0n4 00:13:12.214 Could not set queue depth (nvme0n1) 00:13:12.214 Could not set queue depth (nvme0n2) 00:13:12.214 Could not set queue depth (nvme0n3) 00:13:12.214 Could not set queue depth (nvme0n4) 00:13:12.214 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:12.214 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:12.214 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:12.214 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:12.214 fio-3.35 00:13:12.214 Starting 4 threads 00:13:15.572 17:17:24 -- target/fio.sh@63 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:15.572 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=81584128, buflen=4096 00:13:15.572 fio: pid=3013547, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:15.572 17:17:24 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:15.572 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=103600128, buflen=4096 00:13:15.572 fio: pid=3013546, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:15.572 17:17:24 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:15.572 17:17:24 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:15.572 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=30650368, buflen=4096 00:13:15.572 fio: pid=3013544, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:15.572 17:17:24 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:15.572 17:17:24 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:15.572 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=61743104, buflen=4096 00:13:15.572 fio: pid=3013545, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:15.830 17:17:24 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:15.830 17:17:24 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:15.830 00:13:15.830 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3013544: Wed Apr 24 17:17:24 2024 00:13:15.830 read: IOPS=7843, BW=30.6MiB/s (32.1MB/s)(93.2MiB/3043msec) 00:13:15.830 slat (usec): min=4, max=11979, avg= 8.92, stdev=123.07 00:13:15.831 clat (usec): min=49, max=8721, avg=117.01, stdev=61.96 00:13:15.831 lat (usec): min=56, max=12068, avg=125.93, stdev=137.44 00:13:15.831 clat percentiles (usec): 00:13:15.831 | 1.00th=[ 56], 5.00th=[ 71], 10.00th=[ 75], 20.00th=[ 85], 00:13:15.831 | 30.00th=[ 117], 40.00th=[ 121], 50.00th=[ 123], 60.00th=[ 126], 00:13:15.831 | 70.00th=[ 129], 80.00th=[ 133], 90.00th=[ 141], 95.00th=[ 165], 00:13:15.831 | 99.00th=[ 180], 99.50th=[ 184], 99.90th=[ 194], 99.95th=[ 198], 00:13:15.831 | 99.99th=[ 210] 00:13:15.831 bw ( KiB/s): min=29176, max=30456, per=24.05%, avg=29812.80, stdev=570.49, samples=5 00:13:15.831 iops : min= 7294, max= 7614, avg=7453.20, stdev=142.62, samples=5 00:13:15.831 lat (usec) : 50=0.01%, 100=24.09%, 250=75.90% 00:13:15.831 lat (msec) : 10=0.01% 00:13:15.831 cpu : usr=2.53%, sys=9.53%, ctx=23873, majf=0, minf=1 00:13:15.831 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:15.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.831 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.831 issued rwts: total=23868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:15.831 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:15.831 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3013545: Wed Apr 24 17:17:24 2024 00:13:15.831 read: IOPS=9697, BW=37.9MiB/s 
(39.7MB/s)(123MiB/3244msec) 00:13:15.831 slat (usec): min=5, max=11928, avg= 8.83, stdev=123.70 00:13:15.831 clat (usec): min=34, max=20750, avg=92.50, stdev=119.28 00:13:15.831 lat (usec): min=55, max=20757, avg=101.33, stdev=171.84 00:13:15.831 clat percentiles (usec): 00:13:15.831 | 1.00th=[ 54], 5.00th=[ 59], 10.00th=[ 68], 20.00th=[ 73], 00:13:15.831 | 30.00th=[ 75], 40.00th=[ 78], 50.00th=[ 80], 60.00th=[ 86], 00:13:15.831 | 70.00th=[ 117], 80.00th=[ 122], 90.00th=[ 127], 95.00th=[ 130], 00:13:15.831 | 99.00th=[ 147], 99.50th=[ 163], 99.90th=[ 174], 99.95th=[ 180], 00:13:15.831 | 99.99th=[ 192] 00:13:15.831 bw ( KiB/s): min=30200, max=47416, per=31.02%, avg=38453.83, stdev=7376.65, samples=6 00:13:15.831 iops : min= 7550, max=11854, avg=9613.33, stdev=1844.22, samples=6 00:13:15.831 lat (usec) : 50=0.04%, 100=64.14%, 250=35.81% 00:13:15.831 lat (msec) : 2=0.01%, 50=0.01% 00:13:15.831 cpu : usr=2.84%, sys=11.32%, ctx=31468, majf=0, minf=1 00:13:15.831 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:15.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.831 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.831 issued rwts: total=31459,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:15.831 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:15.831 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3013546: Wed Apr 24 17:17:24 2024 00:13:15.831 read: IOPS=8773, BW=34.3MiB/s (35.9MB/s)(98.8MiB/2883msec) 00:13:15.831 slat (nsec): min=1990, max=11898k, avg=8471.88, stdev=89375.77 00:13:15.831 clat (usec): min=54, max=284, avg=103.01, stdev=18.56 00:13:15.831 lat (usec): min=56, max=11987, avg=111.49, stdev=91.35 00:13:15.831 clat percentiles (usec): 00:13:15.831 | 1.00th=[ 77], 5.00th=[ 81], 10.00th=[ 83], 20.00th=[ 85], 00:13:15.831 | 30.00th=[ 88], 40.00th=[ 91], 50.00th=[ 97], 60.00th=[ 115], 00:13:15.831 | 70.00th=[ 120], 80.00th=[ 123], 90.00th=[ 126], 95.00th=[ 130], 00:13:15.831 | 99.00th=[ 143], 99.50th=[ 151], 99.90th=[ 163], 99.95th=[ 169], 00:13:15.831 | 99.99th=[ 206] 00:13:15.831 bw ( KiB/s): min=30200, max=41696, per=29.07%, avg=36041.60, stdev=5368.49, samples=5 00:13:15.831 iops : min= 7550, max=10424, avg=9010.40, stdev=1342.12, samples=5 00:13:15.831 lat (usec) : 100=52.72%, 250=47.28%, 500=0.01% 00:13:15.831 cpu : usr=2.91%, sys=9.75%, ctx=25296, majf=0, minf=1 00:13:15.831 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:15.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.831 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.831 issued rwts: total=25294,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:15.831 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:15.831 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3013547: Wed Apr 24 17:17:24 2024 00:13:15.831 read: IOPS=7426, BW=29.0MiB/s (30.4MB/s)(77.8MiB/2682msec) 00:13:15.831 slat (nsec): min=6225, max=32237, avg=7087.04, stdev=738.65 00:13:15.831 clat (usec): min=71, max=193, avg=125.94, stdev=14.03 00:13:15.831 lat (usec): min=78, max=201, avg=133.02, stdev=14.04 00:13:15.831 clat percentiles (usec): 00:13:15.831 | 1.00th=[ 87], 5.00th=[ 97], 10.00th=[ 116], 20.00th=[ 120], 00:13:15.831 | 30.00th=[ 122], 40.00th=[ 124], 50.00th=[ 126], 60.00th=[ 128], 00:13:15.831 | 70.00th=[ 130], 80.00th=[ 133], 90.00th=[ 
139], 95.00th=[ 155], 00:13:15.831 | 99.00th=[ 169], 99.50th=[ 174], 99.90th=[ 182], 99.95th=[ 186], 00:13:15.831 | 99.99th=[ 194] 00:13:15.831 bw ( KiB/s): min=29184, max=30456, per=24.05%, avg=29812.80, stdev=566.30, samples=5 00:13:15.831 iops : min= 7296, max= 7614, avg=7453.20, stdev=141.57, samples=5 00:13:15.831 lat (usec) : 100=5.62%, 250=94.37% 00:13:15.831 cpu : usr=2.09%, sys=8.95%, ctx=19919, majf=0, minf=2 00:13:15.831 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:15.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.831 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.831 issued rwts: total=19919,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:15.831 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:15.831 00:13:15.831 Run status group 0 (all jobs): 00:13:15.831 READ: bw=121MiB/s (127MB/s), 29.0MiB/s-37.9MiB/s (30.4MB/s-39.7MB/s), io=393MiB (412MB), run=2682-3244msec 00:13:15.831 00:13:15.831 Disk stats (read/write): 00:13:15.831 nvme0n1: ios=21896/0, merge=0/0, ticks=2487/0, in_queue=2487, util=95.13% 00:13:15.831 nvme0n2: ios=29822/0, merge=0/0, ticks=2632/0, in_queue=2632, util=94.84% 00:13:15.831 nvme0n3: ios=25293/0, merge=0/0, ticks=2444/0, in_queue=2444, util=95.99% 00:13:15.831 nvme0n4: ios=19470/0, merge=0/0, ticks=2331/0, in_queue=2331, util=96.49% 00:13:15.831 17:17:25 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:15.831 17:17:25 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:16.089 17:17:25 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:16.089 17:17:25 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:16.348 17:17:25 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:16.348 17:17:25 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:16.606 17:17:25 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:16.606 17:17:25 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:16.606 17:17:25 -- target/fio.sh@69 -- # fio_status=0 00:13:16.606 17:17:25 -- target/fio.sh@70 -- # wait 3013402 00:13:16.606 17:17:25 -- target/fio.sh@70 -- # fio_status=4 00:13:16.606 17:17:25 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:17.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.540 17:17:26 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:17.540 17:17:26 -- common/autotest_common.sh@1205 -- # local i=0 00:13:17.540 17:17:26 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:17.540 17:17:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.540 17:17:26 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:17.540 17:17:26 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.540 17:17:26 -- common/autotest_common.sh@1217 -- # return 0 00:13:17.540 17:17:26 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:17.540 17:17:26 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:17.540 nvmf hotplug test: fio 
failed as expected 00:13:17.540 17:17:26 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.798 17:17:26 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:17.798 17:17:26 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:17.798 17:17:26 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:17.798 17:17:26 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:17.798 17:17:26 -- target/fio.sh@91 -- # nvmftestfini 00:13:17.798 17:17:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:17.798 17:17:26 -- nvmf/common.sh@117 -- # sync 00:13:17.798 17:17:26 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:17.798 17:17:26 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:17.798 17:17:26 -- nvmf/common.sh@120 -- # set +e 00:13:17.798 17:17:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:17.798 17:17:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:17.798 rmmod nvme_rdma 00:13:17.798 rmmod nvme_fabrics 00:13:17.798 17:17:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:17.798 17:17:26 -- nvmf/common.sh@124 -- # set -e 00:13:17.798 17:17:26 -- nvmf/common.sh@125 -- # return 0 00:13:17.798 17:17:26 -- nvmf/common.sh@478 -- # '[' -n 3012608 ']' 00:13:17.798 17:17:26 -- nvmf/common.sh@479 -- # killprocess 3012608 00:13:17.798 17:17:26 -- common/autotest_common.sh@936 -- # '[' -z 3012608 ']' 00:13:17.798 17:17:26 -- common/autotest_common.sh@940 -- # kill -0 3012608 00:13:17.798 17:17:26 -- common/autotest_common.sh@941 -- # uname 00:13:17.798 17:17:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:17.798 17:17:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3012608 00:13:17.798 17:17:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:17.798 17:17:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:17.798 17:17:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3012608' 00:13:17.798 killing process with pid 3012608 00:13:17.798 17:17:26 -- common/autotest_common.sh@955 -- # kill 3012608 00:13:17.798 17:17:26 -- common/autotest_common.sh@960 -- # wait 3012608 00:13:18.057 17:17:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:18.057 17:17:27 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:13:18.057 00:13:18.057 real 0m24.826s 00:13:18.057 user 1m50.637s 00:13:18.057 sys 0m8.201s 00:13:18.057 17:17:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:18.057 17:17:27 -- common/autotest_common.sh@10 -- # set +x 00:13:18.057 ************************************ 00:13:18.057 END TEST nvmf_fio_target 00:13:18.057 ************************************ 00:13:18.057 17:17:27 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:13:18.057 17:17:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:18.057 17:17:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:18.057 17:17:27 -- common/autotest_common.sh@10 -- # set +x 00:13:18.315 ************************************ 00:13:18.315 START TEST nvmf_bdevio 00:13:18.315 ************************************ 00:13:18.315 17:17:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:13:18.315 * Looking for test storage... 
00:13:18.315 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:18.315 17:17:27 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:18.315 17:17:27 -- nvmf/common.sh@7 -- # uname -s 00:13:18.315 17:17:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.315 17:17:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.315 17:17:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.315 17:17:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.315 17:17:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.315 17:17:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.315 17:17:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.315 17:17:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.315 17:17:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.315 17:17:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.315 17:17:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:13:18.315 17:17:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:13:18.315 17:17:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.315 17:17:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.315 17:17:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:18.316 17:17:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.316 17:17:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:18.316 17:17:27 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.316 17:17:27 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.316 17:17:27 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.316 17:17:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.316 17:17:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.316 17:17:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.316 17:17:27 -- paths/export.sh@5 -- # export PATH 00:13:18.316 17:17:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.316 17:17:27 -- nvmf/common.sh@47 -- # : 0 00:13:18.316 17:17:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:18.316 17:17:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:18.316 17:17:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:18.316 17:17:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.316 17:17:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.316 17:17:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:18.316 17:17:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:18.316 17:17:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:18.316 17:17:27 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:18.316 17:17:27 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:18.316 17:17:27 -- target/bdevio.sh@14 -- # nvmftestinit 00:13:18.316 17:17:27 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:13:18.316 17:17:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.316 17:17:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:18.316 17:17:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:18.316 17:17:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:18.316 17:17:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.316 17:17:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:18.316 17:17:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.316 17:17:27 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:18.316 17:17:27 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:18.316 17:17:27 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:18.316 17:17:27 -- common/autotest_common.sh@10 -- # set +x 00:13:23.579 17:17:32 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:23.579 17:17:32 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:23.579 17:17:32 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:23.579 17:17:32 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:23.580 17:17:32 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:23.580 17:17:32 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:23.580 17:17:32 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:23.580 17:17:32 -- nvmf/common.sh@295 -- # net_devs=() 00:13:23.580 17:17:32 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:23.580 17:17:32 -- nvmf/common.sh@296 
-- # e810=() 00:13:23.580 17:17:32 -- nvmf/common.sh@296 -- # local -ga e810 00:13:23.580 17:17:32 -- nvmf/common.sh@297 -- # x722=() 00:13:23.580 17:17:32 -- nvmf/common.sh@297 -- # local -ga x722 00:13:23.580 17:17:32 -- nvmf/common.sh@298 -- # mlx=() 00:13:23.580 17:17:32 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:23.580 17:17:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:23.580 17:17:32 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:23.580 17:17:32 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:23.580 17:17:32 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:23.580 17:17:32 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:23.580 17:17:32 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:23.580 17:17:32 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:23.580 17:17:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:23.580 17:17:32 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:23.580 17:17:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:23.580 17:17:32 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:23.580 17:17:32 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:23.580 17:17:32 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:23.580 17:17:32 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:23.580 17:17:32 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:23.580 17:17:32 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:23.580 17:17:32 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:23.580 17:17:32 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:23.580 17:17:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:23.580 17:17:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:13:23.580 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:13:23.580 17:17:32 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:23.580 17:17:32 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:23.580 17:17:32 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:23.580 17:17:32 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:23.580 17:17:32 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:23.580 17:17:32 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:23.580 17:17:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:23.580 17:17:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:13:23.580 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:13:23.580 17:17:32 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:23.580 17:17:32 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:23.580 17:17:32 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:23.580 17:17:32 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:23.580 17:17:32 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:23.580 17:17:32 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:23.580 17:17:32 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:23.580 17:17:32 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:23.580 17:17:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:23.580 17:17:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.580 17:17:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 
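For reference, the two Mellanox ports that the pci_devs scan above turns up (vendor 0x15b3, device 0x1015 at 0000:da:00.0 and 0000:da:00.1) can be cross-checked by hand against the same PCI data; a minimal sketch, assuming lspci is available on the node:

  # list NICs by the Mellanox vendor ID the scan keys on
  lspci -nn -d 15b3:
  # map one PCI function to its netdev, mirroring the /sys glob the script evaluates next
  ls /sys/bus/pci/devices/0000:da:00.0/net/    # -> mlx_0_0 on this node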
00:13:23.580 17:17:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.580 17:17:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:13:23.580 Found net devices under 0000:da:00.0: mlx_0_0 00:13:23.580 17:17:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.580 17:17:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:23.580 17:17:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.580 17:17:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:23.580 17:17:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.580 17:17:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:13:23.580 Found net devices under 0000:da:00.1: mlx_0_1 00:13:23.580 17:17:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.580 17:17:32 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:23.580 17:17:32 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:23.580 17:17:32 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:23.580 17:17:32 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:13:23.580 17:17:32 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:13:23.580 17:17:32 -- nvmf/common.sh@409 -- # rdma_device_init 00:13:23.580 17:17:32 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:13:23.580 17:17:32 -- nvmf/common.sh@58 -- # uname 00:13:23.580 17:17:32 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:23.580 17:17:32 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:23.580 17:17:32 -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:23.580 17:17:32 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:23.580 17:17:32 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:23.580 17:17:32 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:23.580 17:17:32 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:23.580 17:17:32 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:23.580 17:17:32 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:13:23.580 17:17:32 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:23.580 17:17:32 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:23.580 17:17:32 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:23.580 17:17:32 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:23.580 17:17:32 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:23.580 17:17:32 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:23.839 17:17:32 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:23.839 17:17:32 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:23.839 17:17:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:23.839 17:17:32 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:23.839 17:17:32 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:23.839 17:17:32 -- nvmf/common.sh@105 -- # continue 2 00:13:23.839 17:17:32 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:23.839 17:17:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:23.839 17:17:32 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:23.839 17:17:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:23.839 17:17:32 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:23.839 17:17:32 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:23.839 17:17:32 -- nvmf/common.sh@105 -- # continue 2 00:13:23.839 17:17:32 -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:13:23.839 17:17:32 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:23.839 17:17:32 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:23.839 17:17:32 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:23.839 17:17:32 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:23.839 17:17:32 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:23.839 17:17:32 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:23.839 17:17:32 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:23.839 17:17:32 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:23.839 430: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:23.839 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:13:23.839 altname enp218s0f0np0 00:13:23.839 altname ens818f0np0 00:13:23.839 inet 192.168.100.8/24 scope global mlx_0_0 00:13:23.839 valid_lft forever preferred_lft forever 00:13:23.839 17:17:32 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:23.839 17:17:32 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:23.839 17:17:32 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:23.839 17:17:32 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:23.839 17:17:32 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:23.839 17:17:32 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:23.839 17:17:32 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:23.839 17:17:32 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:23.839 17:17:32 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:23.839 431: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:23.839 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:13:23.839 altname enp218s0f1np1 00:13:23.839 altname ens818f1np1 00:13:23.839 inet 192.168.100.9/24 scope global mlx_0_1 00:13:23.839 valid_lft forever preferred_lft forever 00:13:23.839 17:17:32 -- nvmf/common.sh@411 -- # return 0 00:13:23.839 17:17:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:23.839 17:17:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:23.839 17:17:32 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:13:23.839 17:17:32 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:13:23.839 17:17:32 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:23.839 17:17:32 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:23.839 17:17:32 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:23.839 17:17:32 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:23.839 17:17:32 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:23.839 17:17:32 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:23.839 17:17:32 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:23.839 17:17:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:23.839 17:17:32 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:23.839 17:17:32 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:23.839 17:17:32 -- nvmf/common.sh@105 -- # continue 2 00:13:23.839 17:17:32 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:23.839 17:17:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:23.839 17:17:32 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:23.839 17:17:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:23.839 17:17:32 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:23.839 17:17:32 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:23.839 17:17:32 -- 
nvmf/common.sh@105 -- # continue 2 00:13:23.839 17:17:32 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:23.839 17:17:32 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:23.839 17:17:32 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:23.839 17:17:32 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:23.839 17:17:32 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:23.839 17:17:32 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:23.839 17:17:32 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:23.839 17:17:32 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:23.839 17:17:32 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:23.839 17:17:32 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:23.839 17:17:32 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:23.839 17:17:32 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:23.839 17:17:32 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:13:23.839 192.168.100.9' 00:13:23.839 17:17:32 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:13:23.839 192.168.100.9' 00:13:23.839 17:17:32 -- nvmf/common.sh@446 -- # head -n 1 00:13:23.839 17:17:32 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:23.839 17:17:32 -- nvmf/common.sh@447 -- # tail -n +2 00:13:23.839 17:17:32 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:13:23.839 192.168.100.9' 00:13:23.839 17:17:32 -- nvmf/common.sh@447 -- # head -n 1 00:13:23.840 17:17:32 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:23.840 17:17:32 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:13:23.840 17:17:32 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:23.840 17:17:32 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:13:23.840 17:17:32 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:13:23.840 17:17:32 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:13:23.840 17:17:32 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:23.840 17:17:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:23.840 17:17:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:23.840 17:17:32 -- common/autotest_common.sh@10 -- # set +x 00:13:23.840 17:17:32 -- nvmf/common.sh@470 -- # nvmfpid=3015871 00:13:23.840 17:17:32 -- nvmf/common.sh@471 -- # waitforlisten 3015871 00:13:23.840 17:17:32 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:23.840 17:17:32 -- common/autotest_common.sh@817 -- # '[' -z 3015871 ']' 00:13:23.840 17:17:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.840 17:17:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:23.840 17:17:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.840 17:17:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:23.840 17:17:32 -- common/autotest_common.sh@10 -- # set +x 00:13:23.840 [2024-04-24 17:17:33.015657] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:13:23.840 [2024-04-24 17:17:33.015704] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.840 EAL: No free 2048 kB hugepages reported on node 1 00:13:23.840 [2024-04-24 17:17:33.071229] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:24.097 [2024-04-24 17:17:33.149052] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:24.097 [2024-04-24 17:17:33.149088] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:24.097 [2024-04-24 17:17:33.149095] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:24.097 [2024-04-24 17:17:33.149101] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:24.097 [2024-04-24 17:17:33.149106] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:24.097 [2024-04-24 17:17:33.149213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:24.097 [2024-04-24 17:17:33.149320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:13:24.097 [2024-04-24 17:17:33.149425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:24.097 [2024-04-24 17:17:33.149426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:13:24.660 17:17:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:24.660 17:17:33 -- common/autotest_common.sh@850 -- # return 0 00:13:24.660 17:17:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:24.660 17:17:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:24.660 17:17:33 -- common/autotest_common.sh@10 -- # set +x 00:13:24.660 17:17:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:24.660 17:17:33 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:24.660 17:17:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:24.660 17:17:33 -- common/autotest_common.sh@10 -- # set +x 00:13:24.660 [2024-04-24 17:17:33.884602] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x19cb840/0x19cfd30) succeed. 00:13:24.660 [2024-04-24 17:17:33.894785] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x19cce30/0x1a113c0) succeed. 
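The target bring-up that bdevio.sh drives through rpc_cmd here can equally be run by hand; a minimal sketch of the same steps as standalone rpc.py calls, using the values from this run (Malloc0, cnode1, 192.168.100.8:4420) and assuming the nvmf_tgt started above is listening on the default /var/tmp/spdk.sock:

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # RDMA transport; this is what produces the 'Create IB device' messages above
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  # 64 MiB malloc bdev with 512-byte blocks to back the namespace
  $RPC bdev_malloc_create 64 512 -b Malloc0
  # subsystem, namespace, and RDMA listener on the first target IP
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420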
00:13:24.916 17:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:24.916 17:17:34 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:24.916 17:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:24.916 17:17:34 -- common/autotest_common.sh@10 -- # set +x 00:13:24.916 Malloc0 00:13:24.917 17:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:24.917 17:17:34 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:24.917 17:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:24.917 17:17:34 -- common/autotest_common.sh@10 -- # set +x 00:13:24.917 17:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:24.917 17:17:34 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:24.917 17:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:24.917 17:17:34 -- common/autotest_common.sh@10 -- # set +x 00:13:24.917 17:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:24.917 17:17:34 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:24.917 17:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:24.917 17:17:34 -- common/autotest_common.sh@10 -- # set +x 00:13:24.917 [2024-04-24 17:17:34.064019] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:24.917 17:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:24.917 17:17:34 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:24.917 17:17:34 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:24.917 17:17:34 -- nvmf/common.sh@521 -- # config=() 00:13:24.917 17:17:34 -- nvmf/common.sh@521 -- # local subsystem config 00:13:24.917 17:17:34 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:24.917 17:17:34 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:24.917 { 00:13:24.917 "params": { 00:13:24.917 "name": "Nvme$subsystem", 00:13:24.917 "trtype": "$TEST_TRANSPORT", 00:13:24.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:24.917 "adrfam": "ipv4", 00:13:24.917 "trsvcid": "$NVMF_PORT", 00:13:24.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:24.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:24.917 "hdgst": ${hdgst:-false}, 00:13:24.917 "ddgst": ${ddgst:-false} 00:13:24.917 }, 00:13:24.917 "method": "bdev_nvme_attach_controller" 00:13:24.917 } 00:13:24.917 EOF 00:13:24.917 )") 00:13:24.917 17:17:34 -- nvmf/common.sh@543 -- # cat 00:13:24.917 17:17:34 -- nvmf/common.sh@545 -- # jq . 00:13:24.917 17:17:34 -- nvmf/common.sh@546 -- # IFS=, 00:13:24.917 17:17:34 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:24.917 "params": { 00:13:24.917 "name": "Nvme1", 00:13:24.917 "trtype": "rdma", 00:13:24.917 "traddr": "192.168.100.8", 00:13:24.917 "adrfam": "ipv4", 00:13:24.917 "trsvcid": "4420", 00:13:24.917 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:24.917 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:24.917 "hdgst": false, 00:13:24.917 "ddgst": false 00:13:24.917 }, 00:13:24.917 "method": "bdev_nvme_attach_controller" 00:13:24.917 }' 00:13:24.917 [2024-04-24 17:17:34.112359] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:13:24.917 [2024-04-24 17:17:34.112402] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3015908 ] 00:13:24.917 EAL: No free 2048 kB hugepages reported on node 1 00:13:25.173 [2024-04-24 17:17:34.166399] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:25.174 [2024-04-24 17:17:34.241309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.174 [2024-04-24 17:17:34.241408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:25.174 [2024-04-24 17:17:34.241410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.174 I/O targets: 00:13:25.174 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:25.174 00:13:25.174 00:13:25.174 CUnit - A unit testing framework for C - Version 2.1-3 00:13:25.174 http://cunit.sourceforge.net/ 00:13:25.174 00:13:25.174 00:13:25.174 Suite: bdevio tests on: Nvme1n1 00:13:25.431 Test: blockdev write read block ...passed 00:13:25.431 Test: blockdev write zeroes read block ...passed 00:13:25.431 Test: blockdev write zeroes read no split ...passed 00:13:25.431 Test: blockdev write zeroes read split ...passed 00:13:25.431 Test: blockdev write zeroes read split partial ...passed 00:13:25.431 Test: blockdev reset ...[2024-04-24 17:17:34.445280] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:25.431 [2024-04-24 17:17:34.467975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:13:25.431 [2024-04-24 17:17:34.494718] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:25.431 passed 00:13:25.431 Test: blockdev write read 8 blocks ...passed 00:13:25.431 Test: blockdev write read size > 128k ...passed 00:13:25.431 Test: blockdev write read invalid size ...passed 00:13:25.431 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:25.431 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:25.431 Test: blockdev write read max offset ...passed 00:13:25.431 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:25.431 Test: blockdev writev readv 8 blocks ...passed 00:13:25.431 Test: blockdev writev readv 30 x 1block ...passed 00:13:25.431 Test: blockdev writev readv block ...passed 00:13:25.431 Test: blockdev writev readv size > 128k ...passed 00:13:25.431 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:25.431 Test: blockdev comparev and writev ...[2024-04-24 17:17:34.497588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:25.431 [2024-04-24 17:17:34.497614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:25.431 [2024-04-24 17:17:34.497623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:25.431 [2024-04-24 17:17:34.497631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:25.431 [2024-04-24 17:17:34.497791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:25.431 [2024-04-24 17:17:34.497800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:25.431 [2024-04-24 17:17:34.497807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:25.431 [2024-04-24 17:17:34.497814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:25.431 [2024-04-24 17:17:34.497965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:25.431 [2024-04-24 17:17:34.497973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:25.431 [2024-04-24 17:17:34.497981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:25.431 [2024-04-24 17:17:34.497987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:25.431 [2024-04-24 17:17:34.498159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:25.431 [2024-04-24 17:17:34.498167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:25.431 [2024-04-24 17:17:34.498175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:25.431 [2024-04-24 17:17:34.498181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:25.431 passed 00:13:25.431 Test: blockdev nvme passthru rw ...passed 00:13:25.431 Test: blockdev nvme passthru vendor specific ...[2024-04-24 17:17:34.498434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:13:25.431 [2024-04-24 17:17:34.498443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:25.431 [2024-04-24 17:17:34.498487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:13:25.431 [2024-04-24 17:17:34.498495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:25.431 [2024-04-24 17:17:34.498538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:13:25.431 [2024-04-24 17:17:34.498546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:25.431 [2024-04-24 17:17:34.498584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:13:25.431 [2024-04-24 17:17:34.498591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:25.431 passed 00:13:25.431 Test: blockdev nvme admin passthru ...passed 00:13:25.431 Test: blockdev copy ...passed 00:13:25.431 00:13:25.431 Run Summary: Type Total Ran Passed Failed Inactive 00:13:25.431 suites 1 1 n/a 0 0 00:13:25.431 tests 23 23 23 0 0 00:13:25.431 asserts 152 152 152 0 n/a 00:13:25.431 00:13:25.431 Elapsed time = 0.171 seconds 00:13:25.689 17:17:34 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:25.689 17:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:25.689 17:17:34 -- common/autotest_common.sh@10 -- # set +x 00:13:25.689 17:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:25.689 17:17:34 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:25.689 17:17:34 -- target/bdevio.sh@30 -- # nvmftestfini 00:13:25.689 17:17:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:25.689 17:17:34 -- nvmf/common.sh@117 -- # sync 00:13:25.689 17:17:34 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:25.689 17:17:34 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:25.689 17:17:34 -- nvmf/common.sh@120 -- # set +e 00:13:25.689 17:17:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:25.689 17:17:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:25.689 rmmod nvme_rdma 00:13:25.689 rmmod nvme_fabrics 00:13:25.689 17:17:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:25.689 17:17:34 -- nvmf/common.sh@124 -- # set -e 00:13:25.689 17:17:34 -- nvmf/common.sh@125 -- # return 0 00:13:25.689 17:17:34 -- nvmf/common.sh@478 -- # '[' -n 3015871 ']' 00:13:25.689 17:17:34 -- nvmf/common.sh@479 -- # killprocess 3015871 00:13:25.689 17:17:34 -- common/autotest_common.sh@936 -- # '[' -z 3015871 ']' 00:13:25.689 17:17:34 -- common/autotest_common.sh@940 -- # kill -0 3015871 00:13:25.689 17:17:34 -- common/autotest_common.sh@941 -- # uname 00:13:25.689 17:17:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:25.689 17:17:34 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3015871 00:13:25.689 17:17:34 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:13:25.689 17:17:34 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:13:25.689 17:17:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3015871' 00:13:25.689 killing process with pid 3015871 00:13:25.689 17:17:34 -- common/autotest_common.sh@955 -- # kill 3015871 00:13:25.689 17:17:34 -- common/autotest_common.sh@960 -- # wait 3015871 00:13:25.947 17:17:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:25.947 17:17:35 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:13:25.947 00:13:25.947 real 0m7.713s 00:13:25.947 user 0m10.445s 00:13:25.947 sys 0m4.626s 00:13:25.947 17:17:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:25.947 17:17:35 -- common/autotest_common.sh@10 -- # set +x 00:13:25.947 ************************************ 00:13:25.947 END TEST nvmf_bdevio 00:13:25.947 ************************************ 00:13:25.947 17:17:35 -- nvmf/nvmf.sh@58 -- # '[' rdma = tcp ']' 00:13:25.947 17:17:35 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:13:25.947 17:17:35 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:13:25.947 17:17:35 -- nvmf/nvmf.sh@71 -- # '[' rdma = tcp ']' 00:13:25.947 17:17:35 -- nvmf/nvmf.sh@77 -- # [[ rdma == \r\d\m\a ]] 00:13:25.947 17:17:35 -- nvmf/nvmf.sh@78 -- # run_test nvmf_device_removal test/nvmf/target/device_removal.sh --transport=rdma 00:13:25.947 17:17:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:25.947 17:17:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:25.947 17:17:35 -- common/autotest_common.sh@10 -- # set +x 00:13:26.207 ************************************ 00:13:26.207 START TEST nvmf_device_removal 00:13:26.207 ************************************ 00:13:26.207 17:17:35 -- common/autotest_common.sh@1111 -- # test/nvmf/target/device_removal.sh --transport=rdma 00:13:26.207 * Looking for test storage... 
00:13:26.207 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:26.207 17:17:35 -- target/device_removal.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:13:26.207 17:17:35 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:26.207 17:17:35 -- common/autotest_common.sh@34 -- # set -e 00:13:26.207 17:17:35 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:26.207 17:17:35 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:26.207 17:17:35 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:13:26.207 17:17:35 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:13:26.207 17:17:35 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:13:26.207 17:17:35 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:26.207 17:17:35 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:13:26.207 17:17:35 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:26.207 17:17:35 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:26.207 17:17:35 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:26.207 17:17:35 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:26.207 17:17:35 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:26.207 17:17:35 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:26.207 17:17:35 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:26.207 17:17:35 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:26.207 17:17:35 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:26.207 17:17:35 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:26.207 17:17:35 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:26.207 17:17:35 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:26.207 17:17:35 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:26.207 17:17:35 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:26.207 17:17:35 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:13:26.207 17:17:35 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:26.207 17:17:35 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:13:26.207 17:17:35 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:13:26.207 17:17:35 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:13:26.207 17:17:35 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:13:26.207 17:17:35 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:26.207 17:17:35 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:13:26.207 17:17:35 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:13:26.207 17:17:35 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:26.207 17:17:35 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:26.207 17:17:35 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:13:26.207 17:17:35 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:13:26.207 17:17:35 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:13:26.207 17:17:35 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:13:26.207 17:17:35 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:13:26.207 17:17:35 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:13:26.207 17:17:35 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 
00:13:26.207 17:17:35 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:13:26.208 17:17:35 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:13:26.208 17:17:35 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:13:26.208 17:17:35 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:13:26.208 17:17:35 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:13:26.208 17:17:35 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:13:26.208 17:17:35 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:13:26.208 17:17:35 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:13:26.208 17:17:35 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:13:26.208 17:17:35 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:26.208 17:17:35 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:13:26.208 17:17:35 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:13:26.208 17:17:35 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:13:26.208 17:17:35 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:26.208 17:17:35 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:13:26.208 17:17:35 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:13:26.208 17:17:35 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:13:26.208 17:17:35 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:13:26.208 17:17:35 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:13:26.208 17:17:35 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:13:26.208 17:17:35 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:13:26.208 17:17:35 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:13:26.208 17:17:35 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:13:26.208 17:17:35 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:13:26.208 17:17:35 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:13:26.208 17:17:35 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:13:26.208 17:17:35 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:13:26.208 17:17:35 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:13:26.208 17:17:35 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:13:26.208 17:17:35 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:13:26.208 17:17:35 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:13:26.208 17:17:35 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:13:26.208 17:17:35 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:13:26.208 17:17:35 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:26.208 17:17:35 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:13:26.208 17:17:35 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:13:26.208 17:17:35 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:13:26.208 17:17:35 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:13:26.208 17:17:35 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:13:26.208 17:17:35 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:13:26.208 17:17:35 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:13:26.208 17:17:35 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:13:26.208 17:17:35 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:13:26.208 17:17:35 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:13:26.208 17:17:35 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:13:26.208 17:17:35 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:26.208 17:17:35 -- common/build_config.sh@81 -- # 
CONFIG_CROSS_PREFIX= 00:13:26.208 17:17:35 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:13:26.208 17:17:35 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:13:26.208 17:17:35 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:13:26.208 17:17:35 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:13:26.208 17:17:35 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:13:26.208 17:17:35 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:13:26.208 17:17:35 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:13:26.208 17:17:35 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:13:26.208 17:17:35 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:13:26.208 17:17:35 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:26.208 17:17:35 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:26.208 17:17:35 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:26.208 17:17:35 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:26.208 17:17:35 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:26.208 17:17:35 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:26.208 17:17:35 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:13:26.208 17:17:35 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:26.208 #define SPDK_CONFIG_H 00:13:26.208 #define SPDK_CONFIG_APPS 1 00:13:26.208 #define SPDK_CONFIG_ARCH native 00:13:26.208 #undef SPDK_CONFIG_ASAN 00:13:26.208 #undef SPDK_CONFIG_AVAHI 00:13:26.208 #undef SPDK_CONFIG_CET 00:13:26.208 #define SPDK_CONFIG_COVERAGE 1 00:13:26.208 #define SPDK_CONFIG_CROSS_PREFIX 00:13:26.208 #undef SPDK_CONFIG_CRYPTO 00:13:26.208 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:26.208 #undef SPDK_CONFIG_CUSTOMOCF 00:13:26.208 #undef SPDK_CONFIG_DAOS 00:13:26.208 #define SPDK_CONFIG_DAOS_DIR 00:13:26.208 #define SPDK_CONFIG_DEBUG 1 00:13:26.208 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:26.208 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:13:26.208 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:26.208 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:26.208 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:26.208 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:13:26.208 #define SPDK_CONFIG_EXAMPLES 1 00:13:26.208 #undef SPDK_CONFIG_FC 00:13:26.208 #define SPDK_CONFIG_FC_PATH 00:13:26.208 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:26.208 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:26.208 #undef SPDK_CONFIG_FUSE 00:13:26.208 #undef SPDK_CONFIG_FUZZER 00:13:26.208 #define SPDK_CONFIG_FUZZER_LIB 00:13:26.208 #undef SPDK_CONFIG_GOLANG 00:13:26.208 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:26.208 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:26.208 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:26.208 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:13:26.208 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:26.208 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:26.208 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 
00:13:26.208 #define SPDK_CONFIG_IDXD 1 00:13:26.208 #undef SPDK_CONFIG_IDXD_KERNEL 00:13:26.208 #undef SPDK_CONFIG_IPSEC_MB 00:13:26.208 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:26.208 #define SPDK_CONFIG_ISAL 1 00:13:26.208 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:26.208 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:26.208 #define SPDK_CONFIG_LIBDIR 00:13:26.208 #undef SPDK_CONFIG_LTO 00:13:26.208 #define SPDK_CONFIG_MAX_LCORES 00:13:26.208 #define SPDK_CONFIG_NVME_CUSE 1 00:13:26.208 #undef SPDK_CONFIG_OCF 00:13:26.208 #define SPDK_CONFIG_OCF_PATH 00:13:26.208 #define SPDK_CONFIG_OPENSSL_PATH 00:13:26.208 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:26.208 #define SPDK_CONFIG_PGO_DIR 00:13:26.208 #undef SPDK_CONFIG_PGO_USE 00:13:26.208 #define SPDK_CONFIG_PREFIX /usr/local 00:13:26.208 #undef SPDK_CONFIG_RAID5F 00:13:26.208 #undef SPDK_CONFIG_RBD 00:13:26.208 #define SPDK_CONFIG_RDMA 1 00:13:26.208 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:26.208 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:26.208 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:26.208 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:26.208 #define SPDK_CONFIG_SHARED 1 00:13:26.208 #undef SPDK_CONFIG_SMA 00:13:26.208 #define SPDK_CONFIG_TESTS 1 00:13:26.208 #undef SPDK_CONFIG_TSAN 00:13:26.208 #define SPDK_CONFIG_UBLK 1 00:13:26.208 #define SPDK_CONFIG_UBSAN 1 00:13:26.208 #undef SPDK_CONFIG_UNIT_TESTS 00:13:26.208 #undef SPDK_CONFIG_URING 00:13:26.208 #define SPDK_CONFIG_URING_PATH 00:13:26.208 #undef SPDK_CONFIG_URING_ZNS 00:13:26.208 #undef SPDK_CONFIG_USDT 00:13:26.208 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:26.208 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:26.208 #undef SPDK_CONFIG_VFIO_USER 00:13:26.208 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:26.208 #define SPDK_CONFIG_VHOST 1 00:13:26.208 #define SPDK_CONFIG_VIRTIO 1 00:13:26.208 #undef SPDK_CONFIG_VTUNE 00:13:26.208 #define SPDK_CONFIG_VTUNE_DIR 00:13:26.208 #define SPDK_CONFIG_WERROR 1 00:13:26.208 #define SPDK_CONFIG_WPDK_DIR 00:13:26.208 #undef SPDK_CONFIG_XNVME 00:13:26.208 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:26.208 17:17:35 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:26.208 17:17:35 -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:26.208 17:17:35 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.208 17:17:35 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.208 17:17:35 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.208 17:17:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.208 17:17:35 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.208 17:17:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.208 17:17:35 -- paths/export.sh@5 -- # export PATH 00:13:26.209 17:17:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.209 17:17:35 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:13:26.209 17:17:35 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:13:26.209 17:17:35 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:13:26.209 17:17:35 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:13:26.209 17:17:35 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:13:26.209 17:17:35 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:13:26.209 17:17:35 -- pm/common@67 -- # TEST_TAG=N/A 00:13:26.209 17:17:35 -- pm/common@68 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:13:26.209 17:17:35 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:13:26.209 17:17:35 -- pm/common@71 -- # uname -s 00:13:26.209 17:17:35 -- pm/common@71 -- # PM_OS=Linux 00:13:26.209 17:17:35 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:26.209 17:17:35 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:13:26.209 17:17:35 -- pm/common@76 -- # [[ Linux == Linux ]] 00:13:26.209 17:17:35 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:13:26.209 17:17:35 -- pm/common@76 -- # [[ ! 
-e /.dockerenv ]] 00:13:26.209 17:17:35 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:13:26.209 17:17:35 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:13:26.209 17:17:35 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:13:26.209 17:17:35 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:13:26.209 17:17:35 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:13:26.209 17:17:35 -- common/autotest_common.sh@57 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:13:26.209 17:17:35 -- common/autotest_common.sh@61 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:26.209 17:17:35 -- common/autotest_common.sh@63 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:13:26.209 17:17:35 -- common/autotest_common.sh@65 -- # : 1 00:13:26.209 17:17:35 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:26.209 17:17:35 -- common/autotest_common.sh@67 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:13:26.209 17:17:35 -- common/autotest_common.sh@69 -- # : 00:13:26.209 17:17:35 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:13:26.209 17:17:35 -- common/autotest_common.sh@71 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:13:26.209 17:17:35 -- common/autotest_common.sh@73 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:13:26.209 17:17:35 -- common/autotest_common.sh@75 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:13:26.209 17:17:35 -- common/autotest_common.sh@77 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:26.209 17:17:35 -- common/autotest_common.sh@79 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:13:26.209 17:17:35 -- common/autotest_common.sh@81 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:13:26.209 17:17:35 -- common/autotest_common.sh@83 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:13:26.209 17:17:35 -- common/autotest_common.sh@85 -- # : 1 00:13:26.209 17:17:35 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:13:26.209 17:17:35 -- common/autotest_common.sh@87 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:13:26.209 17:17:35 -- common/autotest_common.sh@89 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:13:26.209 17:17:35 -- common/autotest_common.sh@91 -- # : 1 00:13:26.209 17:17:35 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:13:26.209 17:17:35 -- common/autotest_common.sh@93 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:13:26.209 17:17:35 -- common/autotest_common.sh@95 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:26.209 17:17:35 -- common/autotest_common.sh@97 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:13:26.209 17:17:35 -- common/autotest_common.sh@99 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 
00:13:26.209 17:17:35 -- common/autotest_common.sh@101 -- # : rdma 00:13:26.209 17:17:35 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:26.209 17:17:35 -- common/autotest_common.sh@103 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:13:26.209 17:17:35 -- common/autotest_common.sh@105 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:13:26.209 17:17:35 -- common/autotest_common.sh@107 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:13:26.209 17:17:35 -- common/autotest_common.sh@109 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:13:26.209 17:17:35 -- common/autotest_common.sh@111 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:13:26.209 17:17:35 -- common/autotest_common.sh@113 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:13:26.209 17:17:35 -- common/autotest_common.sh@115 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:13:26.209 17:17:35 -- common/autotest_common.sh@117 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:26.209 17:17:35 -- common/autotest_common.sh@119 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:13:26.209 17:17:35 -- common/autotest_common.sh@121 -- # : 1 00:13:26.209 17:17:35 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:13:26.209 17:17:35 -- common/autotest_common.sh@123 -- # : 00:13:26.209 17:17:35 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:26.209 17:17:35 -- common/autotest_common.sh@125 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:13:26.209 17:17:35 -- common/autotest_common.sh@127 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:13:26.209 17:17:35 -- common/autotest_common.sh@129 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:13:26.209 17:17:35 -- common/autotest_common.sh@131 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:13:26.209 17:17:35 -- common/autotest_common.sh@133 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:13:26.209 17:17:35 -- common/autotest_common.sh@135 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:13:26.209 17:17:35 -- common/autotest_common.sh@137 -- # : 00:13:26.209 17:17:35 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:13:26.209 17:17:35 -- common/autotest_common.sh@139 -- # : true 00:13:26.209 17:17:35 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:13:26.209 17:17:35 -- common/autotest_common.sh@141 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:13:26.209 17:17:35 -- common/autotest_common.sh@143 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:13:26.209 17:17:35 -- common/autotest_common.sh@145 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:13:26.209 17:17:35 -- common/autotest_common.sh@147 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@148 -- # export 
SPDK_TEST_USE_IGB_UIO 00:13:26.209 17:17:35 -- common/autotest_common.sh@149 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:13:26.209 17:17:35 -- common/autotest_common.sh@151 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:13:26.209 17:17:35 -- common/autotest_common.sh@153 -- # : mlx5 00:13:26.209 17:17:35 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:13:26.209 17:17:35 -- common/autotest_common.sh@155 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:13:26.209 17:17:35 -- common/autotest_common.sh@157 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:13:26.209 17:17:35 -- common/autotest_common.sh@159 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:13:26.209 17:17:35 -- common/autotest_common.sh@161 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:13:26.209 17:17:35 -- common/autotest_common.sh@163 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:13:26.209 17:17:35 -- common/autotest_common.sh@166 -- # : 00:13:26.209 17:17:35 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:13:26.209 17:17:35 -- common/autotest_common.sh@168 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:13:26.209 17:17:35 -- common/autotest_common.sh@170 -- # : 0 00:13:26.209 17:17:35 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:26.209 17:17:35 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:13:26.209 17:17:35 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:13:26.209 17:17:35 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:13:26.209 17:17:35 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:13:26.209 17:17:35 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:26.209 17:17:35 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:26.209 17:17:35 -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:26.210 17:17:35 -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:26.210 17:17:35 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:26.210 17:17:35 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:26.210 17:17:35 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:13:26.210 17:17:35 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:13:26.210 17:17:35 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:26.210 17:17:35 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:13:26.210 17:17:35 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:26.210 17:17:35 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:26.210 17:17:35 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:26.210 17:17:35 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:26.210 17:17:35 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:26.210 17:17:35 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:13:26.210 17:17:35 -- common/autotest_common.sh@199 -- # cat 00:13:26.210 17:17:35 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:13:26.210 17:17:35 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:26.210 17:17:35 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:26.210 17:17:35 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:26.210 17:17:35 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 
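The long run of paired entries above — a bare ": 0" (or ": rdma", ": mlx5", ": true") immediately followed by "export SPDK_TEST_..." — is autotest_common.sh giving every test flag a fallback value and exporting it; a flag already set earlier in the job keeps its value, an unset one takes the default. A minimal sketch of that default-then-export idiom, with flag names and values taken from the trace rather than quoted from the script itself:

    : "${RUN_NIGHTLY:=0}"                    # keep an already-exported value, else fall back to 0 (traced as ": 0")
    export RUN_NIGHTLY
    : "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"    # traced as ": rdma" at autotest_common.sh@101
    export SPDK_TEST_NVMF_TRANSPORT
    : "${SPDK_TEST_NVMF_NICS:=mlx5}"         # traced as ": mlx5" at autotest_common.sh@153
    export SPDK_TEST_NVMF_NICS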
00:13:26.210 17:17:35 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:13:26.210 17:17:35 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:13:26.210 17:17:35 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:13:26.210 17:17:35 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:13:26.210 17:17:35 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:13:26.210 17:17:35 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:13:26.210 17:17:35 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:26.210 17:17:35 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:26.210 17:17:35 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:26.210 17:17:35 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:26.210 17:17:35 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:26.210 17:17:35 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:26.210 17:17:35 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:26.210 17:17:35 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:26.210 17:17:35 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:13:26.210 17:17:35 -- common/autotest_common.sh@252 -- # export valgrind= 00:13:26.210 17:17:35 -- common/autotest_common.sh@252 -- # valgrind= 00:13:26.210 17:17:35 -- common/autotest_common.sh@258 -- # uname -s 00:13:26.210 17:17:35 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:13:26.210 17:17:35 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:13:26.210 17:17:35 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:13:26.210 17:17:35 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:13:26.210 17:17:35 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:13:26.210 17:17:35 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:13:26.210 17:17:35 -- common/autotest_common.sh@268 -- # MAKE=make 00:13:26.210 17:17:35 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j96 00:13:26.210 17:17:35 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:13:26.210 17:17:35 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:13:26.210 17:17:35 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:13:26.210 17:17:35 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:13:26.210 17:17:35 -- common/autotest_common.sh@289 -- # for i in "$@" 00:13:26.210 17:17:35 -- common/autotest_common.sh@290 -- # case "$i" in 00:13:26.210 17:17:35 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=rdma 00:13:26.210 17:17:35 -- common/autotest_common.sh@307 -- # [[ -z 3015975 ]] 00:13:26.210 17:17:35 -- common/autotest_common.sh@307 -- # kill -0 3015975 00:13:26.210 17:17:35 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:13:26.210 17:17:35 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:13:26.210 17:17:35 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:13:26.210 
17:17:35 -- common/autotest_common.sh@320 -- # local mount target_dir 00:13:26.210 17:17:35 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:13:26.210 17:17:35 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:13:26.210 17:17:35 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:13:26.210 17:17:35 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:13:26.210 17:17:35 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.ExVfzj 00:13:26.210 17:17:35 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:26.210 17:17:35 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:13:26.210 17:17:35 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:13:26.210 17:17:35 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ExVfzj/tests/target /tmp/spdk.ExVfzj 00:13:26.210 17:17:35 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:13:26.210 17:17:35 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:13:26.469 17:17:35 -- common/autotest_common.sh@316 -- # df -T 00:13:26.469 17:17:35 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:13:26.469 17:17:35 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:13:26.469 17:17:35 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:13:26.469 17:17:35 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:13:26.469 17:17:35 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:13:26.469 17:17:35 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:13:26.469 17:17:35 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:13:26.469 17:17:35 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:13:26.469 17:17:35 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:13:26.469 17:17:35 -- common/autotest_common.sh@351 -- # avails["$mount"]=1052192768 00:13:26.469 17:17:35 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:13:26.469 17:17:35 -- common/autotest_common.sh@352 -- # uses["$mount"]=4232237056 00:13:26.469 17:17:35 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:13:26.469 17:17:35 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_root 00:13:26.469 17:17:35 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:13:26.469 17:17:35 -- common/autotest_common.sh@351 -- # avails["$mount"]=182628806656 00:13:26.469 17:17:35 -- common/autotest_common.sh@351 -- # sizes["$mount"]=195974299648 00:13:26.469 17:17:35 -- common/autotest_common.sh@352 -- # uses["$mount"]=13345492992 00:13:26.469 17:17:35 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:13:26.469 17:17:35 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:13:26.469 17:17:35 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:13:26.469 17:17:35 -- common/autotest_common.sh@351 -- # avails["$mount"]=97978810368 00:13:26.469 17:17:35 -- common/autotest_common.sh@351 -- # sizes["$mount"]=97987149824 00:13:26.469 17:17:35 -- common/autotest_common.sh@352 -- # uses["$mount"]=8339456 00:13:26.469 17:17:35 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:13:26.469 17:17:35 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 
00:13:26.469 17:17:35 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:13:26.469 17:17:35 -- common/autotest_common.sh@351 -- # avails["$mount"]=39171825664 00:13:26.469 17:17:35 -- common/autotest_common.sh@351 -- # sizes["$mount"]=39194861568 00:13:26.469 17:17:35 -- common/autotest_common.sh@352 -- # uses["$mount"]=23035904 00:13:26.469 17:17:35 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:13:26.469 17:17:35 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:13:26.469 17:17:35 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:13:26.469 17:17:35 -- common/autotest_common.sh@351 -- # avails["$mount"]=97985867776 00:13:26.469 17:17:35 -- common/autotest_common.sh@351 -- # sizes["$mount"]=97987149824 00:13:26.469 17:17:35 -- common/autotest_common.sh@352 -- # uses["$mount"]=1282048 00:13:26.469 17:17:35 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:13:26.469 17:17:35 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:13:26.469 17:17:35 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:13:26.469 17:17:35 -- common/autotest_common.sh@351 -- # avails["$mount"]=19597422592 00:13:26.469 17:17:35 -- common/autotest_common.sh@351 -- # sizes["$mount"]=19597426688 00:13:26.469 17:17:35 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:13:26.469 17:17:35 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:13:26.469 17:17:35 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:13:26.469 * Looking for test storage... 00:13:26.469 17:17:35 -- common/autotest_common.sh@357 -- # local target_space new_size 00:13:26.469 17:17:35 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:13:26.469 17:17:35 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:26.469 17:17:35 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:26.469 17:17:35 -- common/autotest_common.sh@361 -- # mount=/ 00:13:26.469 17:17:35 -- common/autotest_common.sh@363 -- # target_space=182628806656 00:13:26.469 17:17:35 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:13:26.469 17:17:35 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:13:26.469 17:17:35 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:13:26.469 17:17:35 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:13:26.469 17:17:35 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:13:26.469 17:17:35 -- common/autotest_common.sh@370 -- # new_size=15560085504 00:13:26.469 17:17:35 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:13:26.469 17:17:35 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:26.469 17:17:35 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:26.469 17:17:35 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:26.469 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:26.469 17:17:35 -- common/autotest_common.sh@378 -- # return 0 00:13:26.469 17:17:35 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:13:26.469 17:17:35 -- 
common/autotest_common.sh@1669 -- # shopt -s extdebug 00:13:26.469 17:17:35 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:26.469 17:17:35 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:26.469 17:17:35 -- common/autotest_common.sh@1673 -- # true 00:13:26.469 17:17:35 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:13:26.469 17:17:35 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:13:26.469 17:17:35 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:13:26.469 17:17:35 -- common/autotest_common.sh@27 -- # exec 00:13:26.469 17:17:35 -- common/autotest_common.sh@29 -- # exec 00:13:26.469 17:17:35 -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:26.469 17:17:35 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:13:26.469 17:17:35 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:26.469 17:17:35 -- common/autotest_common.sh@18 -- # set -x 00:13:26.469 17:17:35 -- target/device_removal.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:26.469 17:17:35 -- nvmf/common.sh@7 -- # uname -s 00:13:26.469 17:17:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.469 17:17:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.469 17:17:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.469 17:17:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.469 17:17:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.469 17:17:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.469 17:17:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.469 17:17:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.469 17:17:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.469 17:17:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.469 17:17:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:13:26.469 17:17:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:13:26.469 17:17:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.469 17:17:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.469 17:17:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:26.469 17:17:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.469 17:17:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:26.469 17:17:35 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.469 17:17:35 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.469 17:17:35 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.469 17:17:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.469 17:17:35 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.469 17:17:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.470 17:17:35 -- paths/export.sh@5 -- # export PATH 00:13:26.470 17:17:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.470 17:17:35 -- nvmf/common.sh@47 -- # : 0 00:13:26.470 17:17:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:26.470 17:17:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:26.470 17:17:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.470 17:17:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.470 17:17:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.470 17:17:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:26.470 17:17:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:26.470 17:17:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:26.470 17:17:35 -- target/device_removal.sh@13 -- # tgt_core_mask=0x3 00:13:26.470 17:17:35 -- target/device_removal.sh@14 -- # bdevperf_core_mask=0x4 00:13:26.470 17:17:35 -- target/device_removal.sh@15 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:26.470 17:17:35 -- target/device_removal.sh@16 -- # bdevperf_rpc_pid=-1 00:13:26.470 17:17:35 -- target/device_removal.sh@18 -- # nvmftestinit 00:13:26.470 17:17:35 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:13:26.470 17:17:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.470 17:17:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:26.470 17:17:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:26.470 17:17:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:26.470 17:17:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.470 17:17:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:26.470 17:17:35 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.470 17:17:35 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:26.470 17:17:35 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:26.470 17:17:35 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:26.470 17:17:35 -- common/autotest_common.sh@10 -- # set +x 00:13:31.730 17:17:40 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:31.730 17:17:40 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:31.730 17:17:40 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:31.730 17:17:40 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:31.730 17:17:40 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:31.730 17:17:40 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:31.730 17:17:40 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:31.730 17:17:40 -- nvmf/common.sh@295 -- # net_devs=() 00:13:31.730 17:17:40 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:31.730 17:17:40 -- nvmf/common.sh@296 -- # e810=() 00:13:31.730 17:17:40 -- nvmf/common.sh@296 -- # local -ga e810 00:13:31.730 17:17:40 -- nvmf/common.sh@297 -- # x722=() 00:13:31.730 17:17:40 -- nvmf/common.sh@297 -- # local -ga x722 00:13:31.730 17:17:40 -- nvmf/common.sh@298 -- # mlx=() 00:13:31.730 17:17:40 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:31.730 17:17:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:31.730 17:17:40 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:31.730 17:17:40 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:31.730 17:17:40 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:31.730 17:17:40 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:31.730 17:17:40 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:31.730 17:17:40 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:31.730 17:17:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:31.730 17:17:40 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:31.730 17:17:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:31.730 17:17:40 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:31.730 17:17:40 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:31.730 17:17:40 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:31.730 17:17:40 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:31.730 17:17:40 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:31.730 17:17:40 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:31.730 17:17:40 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:31.730 17:17:40 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:31.730 17:17:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:31.730 17:17:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:13:31.730 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:13:31.730 17:17:40 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:31.730 17:17:40 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:31.730 17:17:40 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:31.730 17:17:40 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:31.730 17:17:40 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:31.730 17:17:40 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:31.730 17:17:40 -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:13:31.730 17:17:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:13:31.730 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:13:31.730 17:17:40 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:31.730 17:17:40 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:31.731 17:17:40 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:31.731 17:17:40 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:31.731 17:17:40 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:31.731 17:17:40 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:31.731 17:17:40 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:31.731 17:17:40 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:31.731 17:17:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:31.731 17:17:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.731 17:17:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:31.731 17:17:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.731 17:17:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:13:31.731 Found net devices under 0000:da:00.0: mlx_0_0 00:13:31.731 17:17:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.731 17:17:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:31.731 17:17:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.731 17:17:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:31.731 17:17:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.731 17:17:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:13:31.731 Found net devices under 0000:da:00.1: mlx_0_1 00:13:31.731 17:17:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.731 17:17:40 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:31.731 17:17:40 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:31.731 17:17:40 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:31.731 17:17:40 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:13:31.731 17:17:40 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:13:31.731 17:17:40 -- nvmf/common.sh@409 -- # rdma_device_init 00:13:31.731 17:17:40 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:13:31.731 17:17:40 -- nvmf/common.sh@58 -- # uname 00:13:31.731 17:17:40 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:31.731 17:17:40 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:31.731 17:17:40 -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:31.731 17:17:40 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:31.731 17:17:40 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:31.731 17:17:40 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:31.731 17:17:40 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:31.731 17:17:40 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:31.731 17:17:40 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:13:31.731 17:17:40 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:31.731 17:17:40 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:31.731 17:17:40 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:31.731 17:17:40 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:31.731 17:17:40 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:31.731 17:17:40 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:31.731 17:17:40 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 
00:13:31.731 17:17:40 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:31.731 17:17:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:31.731 17:17:40 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:31.731 17:17:40 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:31.731 17:17:40 -- nvmf/common.sh@105 -- # continue 2 00:13:31.731 17:17:40 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:31.731 17:17:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:31.731 17:17:40 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:31.731 17:17:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:31.731 17:17:40 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:31.731 17:17:40 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:31.731 17:17:40 -- nvmf/common.sh@105 -- # continue 2 00:13:31.731 17:17:40 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:31.731 17:17:40 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:31.731 17:17:40 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:31.731 17:17:40 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:31.731 17:17:40 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:31.731 17:17:40 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:31.731 17:17:40 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:31.731 17:17:40 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:31.731 17:17:40 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:31.731 430: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:31.731 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:13:31.731 altname enp218s0f0np0 00:13:31.731 altname ens818f0np0 00:13:31.731 inet 192.168.100.8/24 scope global mlx_0_0 00:13:31.731 valid_lft forever preferred_lft forever 00:13:31.731 17:17:40 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:31.731 17:17:40 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:31.731 17:17:40 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:31.731 17:17:40 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:31.731 17:17:40 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:31.731 17:17:40 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:31.731 17:17:40 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:31.731 17:17:40 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:31.731 17:17:40 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:31.731 431: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:31.731 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:13:31.731 altname enp218s0f1np1 00:13:31.731 altname ens818f1np1 00:13:31.731 inet 192.168.100.9/24 scope global mlx_0_1 00:13:31.731 valid_lft forever preferred_lft forever 00:13:31.731 17:17:40 -- nvmf/common.sh@411 -- # return 0 00:13:31.731 17:17:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:31.731 17:17:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:31.731 17:17:40 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:13:31.731 17:17:40 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:13:31.731 17:17:40 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:31.731 17:17:40 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:31.731 17:17:40 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:31.731 17:17:40 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:31.731 17:17:40 -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:31.731 17:17:40 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:31.731 17:17:40 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:31.731 17:17:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:31.731 17:17:40 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:31.731 17:17:40 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:31.731 17:17:40 -- nvmf/common.sh@105 -- # continue 2 00:13:31.731 17:17:40 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:31.731 17:17:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:31.731 17:17:40 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:31.731 17:17:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:31.731 17:17:40 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:31.731 17:17:40 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:31.731 17:17:40 -- nvmf/common.sh@105 -- # continue 2 00:13:31.731 17:17:40 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:31.731 17:17:40 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:31.731 17:17:40 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:31.731 17:17:40 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:31.731 17:17:40 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:31.731 17:17:40 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:31.731 17:17:40 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:31.731 17:17:40 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:31.731 17:17:40 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:31.731 17:17:40 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:31.731 17:17:40 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:31.731 17:17:40 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:31.731 17:17:40 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:13:31.731 192.168.100.9' 00:13:31.731 17:17:40 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:13:31.731 192.168.100.9' 00:13:31.731 17:17:40 -- nvmf/common.sh@446 -- # head -n 1 00:13:31.731 17:17:40 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:31.731 17:17:40 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:13:31.731 192.168.100.9' 00:13:31.731 17:17:40 -- nvmf/common.sh@447 -- # tail -n +2 00:13:31.731 17:17:40 -- nvmf/common.sh@447 -- # head -n 1 00:13:31.731 17:17:40 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:31.731 17:17:40 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:13:31.731 17:17:40 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:31.731 17:17:40 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:13:31.731 17:17:40 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:13:31.731 17:17:40 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:13:31.731 17:17:40 -- target/device_removal.sh@235 -- # BOND_NAME=bond_nvmf 00:13:31.731 17:17:40 -- target/device_removal.sh@236 -- # BOND_IP=10.11.11.26 00:13:31.731 17:17:40 -- target/device_removal.sh@237 -- # BOND_MASK=24 00:13:31.731 17:17:40 -- target/device_removal.sh@311 -- # run_test nvmf_device_removal_pci_remove_no_srq test_remove_and_rescan --no-srq 00:13:31.731 17:17:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:31.731 17:17:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:31.731 17:17:40 -- common/autotest_common.sh@10 -- # set +x 00:13:31.731 
************************************ 00:13:31.731 START TEST nvmf_device_removal_pci_remove_no_srq 00:13:31.731 ************************************ 00:13:31.731 17:17:40 -- common/autotest_common.sh@1111 -- # test_remove_and_rescan --no-srq 00:13:31.731 17:17:40 -- target/device_removal.sh@128 -- # nvmfappstart -m 0x3 00:13:31.731 17:17:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:31.731 17:17:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:31.731 17:17:40 -- common/autotest_common.sh@10 -- # set +x 00:13:31.731 17:17:40 -- nvmf/common.sh@470 -- # nvmfpid=3018181 00:13:31.731 17:17:40 -- nvmf/common.sh@471 -- # waitforlisten 3018181 00:13:31.732 17:17:40 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:31.732 17:17:40 -- common/autotest_common.sh@817 -- # '[' -z 3018181 ']' 00:13:31.732 17:17:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.732 17:17:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:31.732 17:17:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.732 17:17:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:31.732 17:17:40 -- common/autotest_common.sh@10 -- # set +x 00:13:31.990 [2024-04-24 17:17:40.986975] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:13:31.990 [2024-04-24 17:17:40.987017] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.990 EAL: No free 2048 kB hugepages reported on node 1 00:13:31.990 [2024-04-24 17:17:41.038783] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:31.990 [2024-04-24 17:17:41.115503] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:31.990 [2024-04-24 17:17:41.115542] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:31.990 [2024-04-24 17:17:41.115549] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:31.990 [2024-04-24 17:17:41.115555] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:31.990 [2024-04-24 17:17:41.115559] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
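Before nvmf_tgt was launched, nvmftestinit derived the two target addresses from the interfaces found above: each interface's IPv4 address comes out of the "ip -o -4 addr show | awk | cut" pipeline traced at nvmf/common.sh@113, and the first and second entries of the resulting list become NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. A hedged reconstruction of that flow — in the trace the list is built by get_available_rdma_ips over get_rdma_if_list; the two explicit calls below are a simplification:

    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1   # 192.168.100.8/24 -> 192.168.100.8
    }
    RDMA_IP_LIST=$(printf '%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9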
00:13:31.990 [2024-04-24 17:17:41.115616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.990 [2024-04-24 17:17:41.115618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.555 17:17:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:32.555 17:17:41 -- common/autotest_common.sh@850 -- # return 0 00:13:32.555 17:17:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:32.555 17:17:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:32.555 17:17:41 -- common/autotest_common.sh@10 -- # set +x 00:13:32.813 17:17:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:32.813 17:17:41 -- target/device_removal.sh@130 -- # create_subsystem_and_connect --no-srq 00:13:32.813 17:17:41 -- target/device_removal.sh@45 -- # local -gA netdev_nvme_dict 00:13:32.813 17:17:41 -- target/device_removal.sh@46 -- # netdev_nvme_dict=() 00:13:32.813 17:17:41 -- target/device_removal.sh@48 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 --no-srq 00:13:32.813 17:17:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:32.813 17:17:41 -- common/autotest_common.sh@10 -- # set +x 00:13:32.813 [2024-04-24 17:17:41.841951] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ec58b0/0x1ec9da0) succeed. 00:13:32.813 [2024-04-24 17:17:41.850741] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ec6db0/0x1f0b430) succeed. 00:13:32.813 17:17:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:32.813 17:17:41 -- target/device_removal.sh@49 -- # get_rdma_if_list 00:13:32.813 17:17:41 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:32.813 17:17:41 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:32.813 17:17:41 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:32.813 17:17:41 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:32.813 17:17:41 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:32.813 17:17:41 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:32.813 17:17:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:32.813 17:17:41 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:32.813 17:17:41 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:32.813 17:17:41 -- nvmf/common.sh@105 -- # continue 2 00:13:32.813 17:17:41 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:32.813 17:17:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:32.813 17:17:41 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:32.813 17:17:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:32.813 17:17:41 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:32.813 17:17:41 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:32.813 17:17:41 -- nvmf/common.sh@105 -- # continue 2 00:13:32.813 17:17:41 -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:13:32.813 17:17:41 -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_0 00:13:32.813 17:17:41 -- target/device_removal.sh@25 -- # local -a dev_name 00:13:32.813 17:17:41 -- target/device_removal.sh@27 -- # dev_name=mlx_0_0 00:13:32.813 17:17:41 -- target/device_removal.sh@28 -- # malloc_name=mlx_0_0 00:13:32.813 17:17:41 -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_0 00:13:32.813 17:17:41 -- target/device_removal.sh@21 -- 
# echo nqn.2016-06.io.spdk:system_mlx_0_0 00:13:32.813 17:17:41 -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:13:32.813 17:17:41 -- target/device_removal.sh@30 -- # get_ip_address mlx_0_0 00:13:32.813 17:17:41 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:32.813 17:17:41 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:32.813 17:17:41 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:32.813 17:17:41 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:32.813 17:17:41 -- target/device_removal.sh@30 -- # ip=192.168.100.8 00:13:32.813 17:17:41 -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_0 00:13:32.813 17:17:41 -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:13:32.813 17:17:41 -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:13:32.813 17:17:41 -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_0 00:13:32.813 17:17:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:32.813 17:17:41 -- common/autotest_common.sh@10 -- # set +x 00:13:32.813 17:17:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:32.813 17:17:41 -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0 00:13:32.813 17:17:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:32.813 17:17:41 -- common/autotest_common.sh@10 -- # set +x 00:13:32.813 17:17:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:32.813 17:17:41 -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0 00:13:32.813 17:17:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:32.813 17:17:41 -- common/autotest_common.sh@10 -- # set +x 00:13:32.813 17:17:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:32.814 17:17:41 -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 00:13:32.814 17:17:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:32.814 17:17:41 -- common/autotest_common.sh@10 -- # set +x 00:13:32.814 [2024-04-24 17:17:41.984329] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:32.814 17:17:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:32.814 17:17:41 -- target/device_removal.sh@41 -- # return 0 00:13:32.814 17:17:41 -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_0 00:13:32.814 17:17:41 -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:13:32.814 17:17:41 -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_1 00:13:32.814 17:17:41 -- target/device_removal.sh@25 -- # local -a dev_name 00:13:32.814 17:17:41 -- target/device_removal.sh@27 -- # dev_name=mlx_0_1 00:13:32.814 17:17:41 -- target/device_removal.sh@28 -- # malloc_name=mlx_0_1 00:13:32.814 17:17:41 -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_1 00:13:32.814 17:17:41 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:13:32.814 17:17:41 -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:13:32.814 17:17:41 -- target/device_removal.sh@30 -- # get_ip_address mlx_0_1 00:13:32.814 17:17:41 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:32.814 17:17:41 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:32.814 17:17:41 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:32.814 17:17:41 -- nvmf/common.sh@113 -- # cut 
-d/ -f1 00:13:32.814 17:17:42 -- target/device_removal.sh@30 -- # ip=192.168.100.9 00:13:32.814 17:17:42 -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_1 00:13:32.814 17:17:42 -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:13:32.814 17:17:42 -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:13:32.814 17:17:42 -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_1 00:13:32.814 17:17:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:32.814 17:17:42 -- common/autotest_common.sh@10 -- # set +x 00:13:32.814 17:17:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:32.814 17:17:42 -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_1 -a -s SPDK000mlx_0_1 00:13:32.814 17:17:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:32.814 17:17:42 -- common/autotest_common.sh@10 -- # set +x 00:13:32.814 17:17:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:32.814 17:17:42 -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_1 mlx_0_1 00:13:32.814 17:17:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:32.814 17:17:42 -- common/autotest_common.sh@10 -- # set +x 00:13:32.814 17:17:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:32.814 17:17:42 -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 00:13:32.814 17:17:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:32.814 17:17:42 -- common/autotest_common.sh@10 -- # set +x 00:13:33.071 [2024-04-24 17:17:42.063520] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:13:33.071 17:17:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:33.071 17:17:42 -- target/device_removal.sh@41 -- # return 0 00:13:33.071 17:17:42 -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_1 00:13:33.071 17:17:42 -- target/device_removal.sh@53 -- # return 0 00:13:33.071 17:17:42 -- target/device_removal.sh@132 -- # generate_io_traffic_with_bdevperf mlx_0_0 mlx_0_1 00:13:33.071 17:17:42 -- target/device_removal.sh@87 -- # dev_names=('mlx_0_0' 'mlx_0_1') 00:13:33.071 17:17:42 -- target/device_removal.sh@87 -- # local dev_names 00:13:33.071 17:17:42 -- target/device_removal.sh@89 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:33.071 17:17:42 -- target/device_removal.sh@91 -- # bdevperf_pid=3018232 00:13:33.071 17:17:42 -- target/device_removal.sh@93 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; kill -9 $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:33.071 17:17:42 -- target/device_removal.sh@90 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:13:33.071 17:17:42 -- target/device_removal.sh@94 -- # waitforlisten 3018232 /var/tmp/bdevperf.sock 00:13:33.071 17:17:42 -- common/autotest_common.sh@817 -- # '[' -z 3018232 ']' 00:13:33.071 17:17:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:33.071 17:17:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:33.071 17:17:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:13:33.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:33.072 17:17:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:33.072 17:17:42 -- common/autotest_common.sh@10 -- # set +x 00:13:34.002 17:17:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:34.002 17:17:42 -- common/autotest_common.sh@850 -- # return 0 00:13:34.002 17:17:42 -- target/device_removal.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:13:34.002 17:17:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:34.002 17:17:42 -- common/autotest_common.sh@10 -- # set +x 00:13:34.002 17:17:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:34.002 17:17:42 -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:13:34.002 17:17:42 -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_0 00:13:34.002 17:17:42 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:13:34.002 17:17:42 -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:13:34.002 17:17:42 -- target/device_removal.sh@102 -- # get_ip_address mlx_0_0 00:13:34.002 17:17:42 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:34.002 17:17:42 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:34.002 17:17:42 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:34.002 17:17:42 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:34.002 17:17:42 -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.8 00:13:34.002 17:17:42 -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1 00:13:34.002 17:17:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:34.002 17:17:42 -- common/autotest_common.sh@10 -- # set +x 00:13:34.002 Nvme_mlx_0_0n1 00:13:34.002 17:17:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:34.002 17:17:43 -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:13:34.002 17:17:43 -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_1 00:13:34.002 17:17:43 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:13:34.002 17:17:43 -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:13:34.002 17:17:43 -- target/device_removal.sh@102 -- # get_ip_address mlx_0_1 00:13:34.002 17:17:43 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:34.002 17:17:43 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:34.002 17:17:43 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:34.002 17:17:43 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:34.002 17:17:43 -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.9 00:13:34.002 17:17:43 -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1 00:13:34.002 17:17:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:34.002 17:17:43 -- common/autotest_common.sh@10 -- # set +x 00:13:34.002 Nvme_mlx_0_1n1 00:13:34.002 17:17:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:34.002 17:17:43 -- target/device_removal.sh@110 -- # bdevperf_rpc_pid=3018263 00:13:34.002 17:17:43 -- target/device_removal.sh@112 -- # sleep 5 00:13:34.002 17:17:43 -- target/device_removal.sh@109 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:13:39.266 17:17:48 -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:13:39.266 17:17:48 -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_0 00:13:39.266 17:17:48 -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_0 00:13:39.266 17:17:48 -- target/device_removal.sh@71 -- # dev_name=mlx_0_0 00:13:39.267 17:17:48 -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_0 00:13:39.267 17:17:48 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:13:39.267 17:17:48 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.0/net/mlx_0_0/device 00:13:39.267 17:17:48 -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/infiniband 00:13:39.267 17:17:48 -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_0 00:13:39.267 17:17:48 -- target/device_removal.sh@137 -- # get_ip_address mlx_0_0 00:13:39.267 17:17:48 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:39.267 17:17:48 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:39.267 17:17:48 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:39.267 17:17:48 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:39.267 17:17:48 -- target/device_removal.sh@137 -- # origin_ip=192.168.100.8 00:13:39.267 17:17:48 -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_0 00:13:39.267 17:17:48 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:13:39.267 17:17:48 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.0/net/mlx_0_0/device 00:13:39.267 17:17:48 -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0 00:13:39.267 17:17:48 -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:13:39.267 17:17:48 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:13:39.267 17:17:48 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:13:39.267 17:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:39.267 17:17:48 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:13:39.267 17:17:48 -- common/autotest_common.sh@10 -- # set +x 00:13:39.267 17:17:48 -- target/device_removal.sh@77 -- # grep mlx5_0 00:13:39.267 17:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:39.267 mlx5_0 00:13:39.267 17:17:48 -- target/device_removal.sh@78 -- # return 0 00:13:39.267 17:17:48 -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_0 00:13:39.267 17:17:48 -- target/device_removal.sh@66 -- # dev_name=mlx_0_0 00:13:39.267 17:17:48 -- target/device_removal.sh@67 -- # echo 1 00:13:39.267 17:17:48 -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_0 00:13:39.267 17:17:48 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:13:39.267 17:17:48 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.0/net/mlx_0_0/device 00:13:39.267 [2024-04-24 17:17:48.288459] rdma.c:3610:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.8:4420 on device mlx5_0 is being removed. 
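Up to this point the xtrace is just device_removal.sh driving the running nvmf_tgt through the rpc_cmd helper, which forwards to scripts/rpc.py. A minimal standalone sketch of the same per-port bring-up, assuming scripts/rpc.py from this SPDK tree is on PATH and mlx_0_0 carries 192.168.100.8, would be:

    # one RDMA transport for the whole target (this is the --no-srq variant of the test)
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 --no-srq
    # per port: a 128 MB / 512 B-block malloc bdev, a subsystem, a namespace, a listener
    rpc.py bdev_malloc_create 128 512 -b mlx_0_0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420

The second port repeats the same five calls with mlx_0_1 and 192.168.100.9. With the removal of mlx5_0 announced just above, the dump that follows lists the requests still queued on the dying qpair (queue depth 94).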
00:13:39.267 [2024-04-24 17:17:48.289164] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:13:39.267 [2024-04-24 17:17:48.293297] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:13:39.267 [2024-04-24 17:17:48.293317] rdma.c: 916:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 94 00:13:39.267 [2024-04-24 17:17:48.293323] rdma.c: 703:nvmf_rdma_dump_qpair_contents: *ERROR*: Dumping contents of queue pair (QID 1) 00:13:39.267 [2024-04-24 17:17:48.293329] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293334] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293339] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293345] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293350] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:39.267 [2024-04-24 17:17:48.293355] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293360] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293364] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293373] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293378] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293383] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293388] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293393] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293397] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293402] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293407] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293411] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293416] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293423] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293427] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:39.267 [2024-04-24 17:17:48.293432] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293436] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293441] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:39.267 [2024-04-24 17:17:48.293446] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293451] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293455] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293460] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293464] rdma.c: 
691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293469] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293474] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293478] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293483] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293487] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293492] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293497] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:39.267 [2024-04-24 17:17:48.293502] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293506] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:39.267 [2024-04-24 17:17:48.293510] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293515] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293520] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293524] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293529] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293533] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293538] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293542] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293547] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293551] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293557] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293562] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293569] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293583] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293588] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:39.267 [2024-04-24 17:17:48.293592] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293597] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293601] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293606] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:39.267 [2024-04-24 17:17:48.293611] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:39.267 [2024-04-24 17:17:48.293615] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293620] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293624] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 
00:13:39.267 [2024-04-24 17:17:48.293629] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293634] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:39.267 [2024-04-24 17:17:48.293638] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293643] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:39.267 [2024-04-24 17:17:48.293647] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293653] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293658] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293662] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293667] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293672] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:39.267 [2024-04-24 17:17:48.293677] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293682] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:39.267 [2024-04-24 17:17:48.293686] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293691] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293695] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293700] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293705] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293709] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293714] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293719] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293724] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293729] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293735] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293741] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293746] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.267 [2024-04-24 17:17:48.293751] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.267 [2024-04-24 17:17:48.293755] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.293760] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.268 [2024-04-24 17:17:48.293764] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.293769] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.268 [2024-04-24 17:17:48.293774] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:39.268 [2024-04-24 17:17:48.293778] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.268 [2024-04-24 17:17:48.293788] rdma.c: 
689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.293792] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.268 [2024-04-24 17:17:48.293797] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.293802] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:39.268 [2024-04-24 17:17:48.293807] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.293812] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.268 [2024-04-24 17:17:48.293817] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.293822] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:39.268 [2024-04-24 17:17:48.293840] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.293845] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:39.268 [2024-04-24 17:17:48.293849] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.293854] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:39.268 [2024-04-24 17:17:48.293858] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.293863] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.268 [2024-04-24 17:17:48.293868] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.293873] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.268 [2024-04-24 17:17:48.293877] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.293882] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.268 [2024-04-24 17:17:48.293887] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.293892] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.268 [2024-04-24 17:17:48.293897] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.293901] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.268 [2024-04-24 17:17:48.293907] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.293912] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.268 [2024-04-24 17:17:48.293916] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.293921] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.268 [2024-04-24 17:17:48.293927] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.293932] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:39.268 [2024-04-24 17:17:48.293936] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.293941] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.268 [2024-04-24 17:17:48.293946] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.293950] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.268 [2024-04-24 17:17:48.293955] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 
00:13:39.268 [2024-04-24 17:17:48.293959] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.268 [2024-04-24 17:17:48.293964] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.293969] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.268 [2024-04-24 17:17:48.293973] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.293978] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:39.268 [2024-04-24 17:17:48.293983] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.293990] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:39.268 [2024-04-24 17:17:48.293994] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.293999] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.268 [2024-04-24 17:17:48.294005] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.294010] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.268 [2024-04-24 17:17:48.294014] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.294019] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:39.268 [2024-04-24 17:17:48.294024] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:13:39.268 [2024-04-24 17:17:48.294029] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.268 [2024-04-24 17:17:48.294034] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.294038] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.268 [2024-04-24 17:17:48.294042] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.294047] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.268 [2024-04-24 17:17:48.294051] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.294056] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.268 [2024-04-24 17:17:48.294063] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.294068] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.268 [2024-04-24 17:17:48.294073] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.294078] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.268 [2024-04-24 17:17:48.294083] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.294087] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.268 [2024-04-24 17:17:48.294092] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.294096] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:39.268 [2024-04-24 17:17:48.294101] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.294105] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:39.268 [2024-04-24 17:17:48.294110] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.294114] rdma.c: 
691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:39.268 [2024-04-24 17:17:48.294119] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.294124] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.268 [2024-04-24 17:17:48.294130] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.294134] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.268 [2024-04-24 17:17:48.294139] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.294144] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.268 [2024-04-24 17:17:48.294148] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.294153] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:39.268 [2024-04-24 17:17:48.294157] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.294162] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:39.268 [2024-04-24 17:17:48.294166] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.294170] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.268 [2024-04-24 17:17:48.294175] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.294180] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:39.268 [2024-04-24 17:17:48.294184] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.294189] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.268 [2024-04-24 17:17:48.294193] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.294199] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.268 [2024-04-24 17:17:48.294204] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.294209] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:39.268 [2024-04-24 17:17:48.294216] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.294221] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:39.268 [2024-04-24 17:17:48.294226] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.294231] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.268 [2024-04-24 17:17:48.294235] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.294240] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:39.268 [2024-04-24 17:17:48.294245] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.294249] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:13:39.268 [2024-04-24 17:17:48.294254] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:13:39.268 [2024-04-24 17:17:48.294258] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:13:45.824 17:17:54 -- target/device_removal.sh@147 -- # seq 1 10 00:13:45.824 17:17:54 -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:13:45.824 17:17:54 -- target/device_removal.sh@148 -- # 
check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:13:45.824 17:17:54 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:13:45.824 17:17:54 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:13:45.824 17:17:54 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:13:45.824 17:17:54 -- target/device_removal.sh@77 -- # grep mlx5_0 00:13:45.824 17:17:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:45.824 17:17:54 -- common/autotest_common.sh@10 -- # set +x 00:13:45.824 17:17:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:45.824 17:17:54 -- target/device_removal.sh@78 -- # return 1 00:13:45.824 17:17:54 -- target/device_removal.sh@149 -- # break 00:13:45.824 17:17:54 -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:13:45.824 17:17:54 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:13:45.824 17:17:54 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:13:45.824 17:17:54 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:13:45.824 17:17:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:45.824 17:17:54 -- common/autotest_common.sh@10 -- # set +x 00:13:45.824 17:17:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:45.824 17:17:54 -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:13:45.824 17:17:54 -- target/device_removal.sh@160 -- # rescan_pci 00:13:45.824 17:17:54 -- target/device_removal.sh@57 -- # echo 1 00:13:46.757 [2024-04-24 17:17:55.769544] rdma.c:3314:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x20e1890, err 11. Skip rescan. 00:13:46.758 17:17:55 -- target/device_removal.sh@162 -- # seq 1 10 00:13:46.758 17:17:55 -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:13:46.758 17:17:55 -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/net 00:13:46.758 17:17:55 -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_0 00:13:46.758 17:17:55 -- target/device_removal.sh@164 -- # [[ -z mlx_0_0 ]] 00:13:46.758 17:17:55 -- target/device_removal.sh@166 -- # [[ mlx_0_0 != \m\l\x\_\0\_\0 ]] 00:13:46.758 17:17:55 -- target/device_removal.sh@171 -- # break 00:13:46.758 17:17:55 -- target/device_removal.sh@175 -- # [[ -z mlx_0_0 ]] 00:13:46.758 17:17:55 -- target/device_removal.sh@179 -- # ip link set mlx_0_0 up 00:13:46.758 17:17:55 -- target/device_removal.sh@180 -- # get_ip_address mlx_0_0 00:13:46.758 17:17:55 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:46.758 17:17:55 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:46.758 17:17:55 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:46.758 17:17:55 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:46.758 17:17:55 -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:13:46.758 17:17:55 -- target/device_removal.sh@181 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:13:46.758 17:17:55 -- target/device_removal.sh@186 -- # seq 1 10 00:13:46.758 17:17:55 -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:13:46.758 17:17:55 -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:13:46.758 17:17:55 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:13:46.758 17:17:55 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:13:46.758 17:17:55 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:13:46.758 17:17:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.758 17:17:55 -- 
common/autotest_common.sh@10 -- # set +x 00:13:47.016 [2024-04-24 17:17:56.121511] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ec58b0/0x1ec9da0) succeed. 00:13:47.016 [2024-04-24 17:17:56.125511] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:47.016 [2024-04-24 17:17:56.125528] rdma.c:3373:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.8:4420 come back 00:13:47.016 17:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:47.016 17:17:56 -- target/device_removal.sh@187 -- # ib_count=2 00:13:47.016 17:17:56 -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:13:47.016 17:17:56 -- target/device_removal.sh@189 -- # break 00:13:47.016 17:17:56 -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:13:47.016 17:17:56 -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_1 00:13:47.016 17:17:56 -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_1 00:13:47.016 17:17:56 -- target/device_removal.sh@71 -- # dev_name=mlx_0_1 00:13:47.016 17:17:56 -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_1 00:13:47.016 17:17:56 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:13:47.016 17:17:56 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.1/net/mlx_0_1/device 00:13:47.016 17:17:56 -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.1/infiniband 00:13:47.016 17:17:56 -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_1 00:13:47.016 17:17:56 -- target/device_removal.sh@137 -- # get_ip_address mlx_0_1 00:13:47.016 17:17:56 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:47.016 17:17:56 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:47.016 17:17:56 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:47.016 17:17:56 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:47.016 17:17:56 -- target/device_removal.sh@137 -- # origin_ip=192.168.100.9 00:13:47.016 17:17:56 -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_1 00:13:47.016 17:17:56 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:13:47.016 17:17:56 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.1/net/mlx_0_1/device 00:13:47.016 17:17:56 -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.1 00:13:47.016 17:17:56 -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:13:47.016 17:17:56 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:13:47.016 17:17:56 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:13:47.016 17:17:56 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:13:47.016 17:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:47.016 17:17:56 -- common/autotest_common.sh@10 -- # set +x 00:13:47.016 17:17:56 -- target/device_removal.sh@77 -- # grep mlx5_1 00:13:47.016 17:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:47.016 mlx5_1 00:13:47.016 17:17:56 -- target/device_removal.sh@78 -- # return 0 00:13:47.016 17:17:56 -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_1 00:13:47.016 17:17:56 -- target/device_removal.sh@66 -- # dev_name=mlx_0_1 00:13:47.016 17:17:56 -- target/device_removal.sh@67 -- # echo 1 00:13:47.016 17:17:56 -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_1 00:13:47.016 17:17:56 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:13:47.016 17:17:56 -- 
target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.1/net/mlx_0_1/device 00:13:47.274 [2024-04-24 17:17:56.284763] rdma.c:3610:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.9:4420 on device mlx5_1 is being removed. 00:13:47.274 [2024-04-24 17:17:56.284823] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:13:47.274 [2024-04-24 17:17:56.319322] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:13:53.833 17:18:02 -- target/device_removal.sh@147 -- # seq 1 10 00:13:53.833 17:18:02 -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:13:53.833 17:18:02 -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:13:53.833 17:18:02 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:13:53.833 17:18:02 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:13:53.833 17:18:02 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:13:53.833 17:18:02 -- target/device_removal.sh@77 -- # grep mlx5_1 00:13:53.833 17:18:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:53.833 17:18:02 -- common/autotest_common.sh@10 -- # set +x 00:13:53.833 17:18:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:53.833 17:18:02 -- target/device_removal.sh@78 -- # return 1 00:13:53.833 17:18:02 -- target/device_removal.sh@149 -- # break 00:13:53.833 17:18:02 -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:13:53.833 17:18:02 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:13:53.833 17:18:02 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:13:53.833 17:18:02 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:13:53.833 17:18:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:53.833 17:18:02 -- common/autotest_common.sh@10 -- # set +x 00:13:53.833 17:18:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:53.833 17:18:02 -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:13:53.833 17:18:02 -- target/device_removal.sh@160 -- # rescan_pci 00:13:53.833 17:18:02 -- target/device_removal.sh@57 -- # echo 1 00:13:55.204 [2024-04-24 17:18:04.390226] rdma.c:3314:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x20d1680, err 11. Skip rescan. 
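remove_one_nic and rescan_pci are thin wrappers over the kernel's PCI sysfs interface; the xtrace only shows the bare 'echo 1', not the redirection targets, so the following reconstruction of what they plausibly do for this second port (BDF 0000:da:00.1, taken from the readlink above) is an assumption:

    bdf=0000:da:00.1
    # surprise-remove the PCI function; the target then reports the port as removed
    echo 1 > /sys/bus/pci/devices/$bdf/remove
    # after the removal has been verified, ask the kernel to rediscover the device
    echo 1 > /sys/bus/pci/rescan

The 'Failed to init ibv device ... Skip rescan' warning is not fatal in this run: once the mlx5 device finishes probing, the trace below shows the IB device recreated and the 4420 listener restored.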
00:13:55.204 17:18:04 -- target/device_removal.sh@162 -- # seq 1 10 00:13:55.204 17:18:04 -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:13:55.461 17:18:04 -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.1/net 00:13:55.461 17:18:04 -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_1 00:13:55.461 17:18:04 -- target/device_removal.sh@164 -- # [[ -z mlx_0_1 ]] 00:13:55.461 17:18:04 -- target/device_removal.sh@166 -- # [[ mlx_0_1 != \m\l\x\_\0\_\1 ]] 00:13:55.461 17:18:04 -- target/device_removal.sh@171 -- # break 00:13:55.461 17:18:04 -- target/device_removal.sh@175 -- # [[ -z mlx_0_1 ]] 00:13:55.461 17:18:04 -- target/device_removal.sh@179 -- # ip link set mlx_0_1 up 00:13:55.461 17:18:04 -- target/device_removal.sh@180 -- # get_ip_address mlx_0_1 00:13:55.461 17:18:04 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:55.461 17:18:04 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:55.461 17:18:04 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:55.461 17:18:04 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:55.461 17:18:04 -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:13:55.461 17:18:04 -- target/device_removal.sh@181 -- # ip addr add 192.168.100.9/24 dev mlx_0_1 00:13:55.461 17:18:04 -- target/device_removal.sh@186 -- # seq 1 10 00:13:55.461 17:18:04 -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:13:55.461 17:18:04 -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:13:55.461 17:18:04 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:13:55.461 17:18:04 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:13:55.461 17:18:04 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:13:55.461 17:18:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.461 17:18:04 -- common/autotest_common.sh@10 -- # set +x 00:13:55.720 [2024-04-24 17:18:04.713393] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20cf2c0/0x1f0b430) succeed. 
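The get_rdma_dev_count_in_nvmf_tgt polls above read nvmf_get_stats; the jq filter is verbatim from the trace. A self-contained version of the wait loop (the retry count mirrors the script's seq 1 10, the 2-second pause between attempts is an assumption) would be:

    ib_count_after_remove=1
    for i in $(seq 1 10); do
        ib_count=$(rpc.py nvmf_get_stats \
            | jq -r '.poll_groups[0].transports[].devices | length')
        # done once the rescanned device shows up again in the target's poll group
        (( ib_count > ib_count_after_remove )) && break
        sleep 2
    done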
00:13:55.720 [2024-04-24 17:18:04.717311] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:13:55.720 [2024-04-24 17:18:04.717329] rdma.c:3373:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.9:4420 come back 00:13:55.720 17:18:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.720 17:18:04 -- target/device_removal.sh@187 -- # ib_count=2 00:13:55.720 17:18:04 -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:13:55.720 17:18:04 -- target/device_removal.sh@189 -- # break 00:13:55.720 17:18:04 -- target/device_removal.sh@200 -- # stop_bdevperf 00:13:55.720 17:18:04 -- target/device_removal.sh@116 -- # wait 3018263 00:15:17.255 0 00:15:17.255 17:19:13 -- target/device_removal.sh@118 -- # killprocess 3018232 00:15:17.255 17:19:13 -- common/autotest_common.sh@936 -- # '[' -z 3018232 ']' 00:15:17.255 17:19:13 -- common/autotest_common.sh@940 -- # kill -0 3018232 00:15:17.255 17:19:13 -- common/autotest_common.sh@941 -- # uname 00:15:17.255 17:19:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:17.255 17:19:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3018232 00:15:17.255 17:19:13 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:17.255 17:19:13 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:17.255 17:19:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3018232' 00:15:17.255 killing process with pid 3018232 00:15:17.255 17:19:13 -- common/autotest_common.sh@955 -- # kill 3018232 00:15:17.255 17:19:13 -- common/autotest_common.sh@960 -- # wait 3018232 00:15:17.255 17:19:13 -- target/device_removal.sh@119 -- # bdevperf_pid= 00:15:17.255 17:19:13 -- target/device_removal.sh@121 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt 00:15:17.255 [2024-04-24 17:17:42.113582] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:15:17.255 [2024-04-24 17:17:42.113629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3018232 ] 00:15:17.255 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.255 [2024-04-24 17:17:42.161966] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.255 [2024-04-24 17:17:42.237495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.255 Running I/O for 90 seconds... 
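try.txt, replayed from here on, is the initiator side of the test: a bdevperf app started idle and then driven over its own RPC socket. The commands were traced earlier (around 17:17:42); reassembled, and assuming they are run from the root of this SPDK tree, they amount to:

    # start bdevperf with no job config (-z) and a private RPC socket
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
    # once the socket is up: set reconnect behaviour, then attach one controller
    # per subsystem/port (flags verbatim from the trace)
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 \
        -t rdma -a 192.168.100.8 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1
    # kick off the queued verify workload (120 s RPC timeout, as in the trace)
    ./examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests

The -z flag is what keeps bdevperf idle until perform_tests is issued, which is why the 90-second I/O run only starts once both controllers are attached.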
00:15:17.255 [2024-04-24 17:17:48.294380] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:15:17.255 [2024-04-24 17:17:48.294411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.255 [2024-04-24 17:17:48.294421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32722 cdw0:16 sqhd:a3b9 p:0 m:0 dnr:0 00:15:17.255 [2024-04-24 17:17:48.294429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.255 [2024-04-24 17:17:48.294435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32722 cdw0:16 sqhd:a3b9 p:0 m:0 dnr:0 00:15:17.255 [2024-04-24 17:17:48.294442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.255 [2024-04-24 17:17:48.294449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32722 cdw0:16 sqhd:a3b9 p:0 m:0 dnr:0 00:15:17.255 [2024-04-24 17:17:48.294456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.255 [2024-04-24 17:17:48.294462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32722 cdw0:16 sqhd:a3b9 p:0 m:0 dnr:0 00:15:17.255 [2024-04-24 17:17:48.297025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:17.255 [2024-04-24 17:17:48.297036] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:15:17.255 [2024-04-24 17:17:48.297063] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:15:17.255 [2024-04-24 17:17:48.304380] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.255 [2024-04-24 17:17:48.314614] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.255 [2024-04-24 17:17:48.324932] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.255 [2024-04-24 17:17:48.335068] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.255 [2024-04-24 17:17:48.345375] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.255 [2024-04-24 17:17:48.355402] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.255 [2024-04-24 17:17:48.365659] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.255 [2024-04-24 17:17:48.375778] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.255 [2024-04-24 17:17:48.385877] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.255 [2024-04-24 17:17:48.395969] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:15:17.255 [2024-04-24 17:17:48.406016] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.255 [2024-04-24 17:17:48.416079] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.255 [2024-04-24 17:17:48.426185] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.255 [2024-04-24 17:17:48.436211] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.255 [2024-04-24 17:17:48.446527] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.255 [2024-04-24 17:17:48.456552] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.255 [2024-04-24 17:17:48.466580] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.255 [2024-04-24 17:17:48.476665] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.255 [2024-04-24 17:17:48.486762] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.255 [2024-04-24 17:17:48.496791] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.255 [2024-04-24 17:17:48.506815] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.255 [2024-04-24 17:17:48.516841] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.255 [2024-04-24 17:17:48.526883] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.255 [2024-04-24 17:17:48.536907] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.255 [2024-04-24 17:17:48.547206] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.255 [2024-04-24 17:17:48.557233] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.255 [2024-04-24 17:17:48.567324] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.255 [2024-04-24 17:17:48.577351] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.255 [2024-04-24 17:17:48.587437] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.597462] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.607487] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.617730] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.627757] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:15:17.256 [2024-04-24 17:17:48.637785] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.647824] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.657851] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.668064] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.678090] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.688117] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.698145] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.708170] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.718195] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.728220] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.738247] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.748272] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.758300] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.768327] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.778355] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.788381] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.798408] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.808434] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.818460] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.828484] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.838509] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.848536] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.858563] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:15:17.256 [2024-04-24 17:17:48.868591] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.878642] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.888669] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.898695] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.908806] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.918881] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.928909] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.938936] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.949286] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.959423] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.969752] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.979772] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.989797] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:48.999828] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:49.009976] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:49.020095] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:49.030279] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:49.040307] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:49.050501] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:49.060541] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:49.070567] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:49.080595] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:49.090620] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:15:17.256 [2024-04-24 17:17:49.100962] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:49.110990] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:49.121028] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:49.131056] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:49.141082] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:49.151107] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:49.161146] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:49.171174] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:49.181199] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:49.191301] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:49.201327] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:49.211353] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:49.221528] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:49.231554] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:49.241579] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:49.251606] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:49.261631] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:49.271709] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:49.281742] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.256 [2024-04-24 17:17:49.291770] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:15:17.256 [2024-04-24 17:17:49.299530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:198856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.256 [2024-04-24 17:17:49.299548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.256 [2024-04-24 17:17:49.299567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:198864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.256 [2024-04-24 17:17:49.299574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.256 [2024-04-24 17:17:49.299582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:198872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.256 [2024-04-24 17:17:49.299589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.256 [2024-04-24 17:17:49.299597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:198880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.256 [2024-04-24 17:17:49.299602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.256 [2024-04-24 17:17:49.299610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:198888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.256 [2024-04-24 17:17:49.299616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.256 [2024-04-24 17:17:49.299624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:198896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.256 [2024-04-24 17:17:49.299630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.256 [2024-04-24 17:17:49.299638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:198904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.256 [2024-04-24 17:17:49.299644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.256 [2024-04-24 17:17:49.299652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:198912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.256 [2024-04-24 17:17:49.299658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.299665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:198920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.299671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.299679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:198928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.299685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.299693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:198936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.299698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.299706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:198944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.299712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.299719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:198952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.299725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.299735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:198960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.299741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.299749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:198968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.299756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.299764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:198976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.299770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.299778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:198984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.299784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.299792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:198992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.299799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.299807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:199000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.299814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.299822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:199008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.299832] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.299839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:199016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.299846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.299854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:199024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.299860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.299868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:199032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.299875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.299883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:199040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.299889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.299897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:199048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.299903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.299911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:199056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.299919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.299927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:199064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.299933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.299941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:199072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.299947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.299955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:199080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.299961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.299968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:199088 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.299974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.299982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:199096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.299988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.299996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:199104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.300002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.300010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:199112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.300016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.300024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:199120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.300030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.300038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:199128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.300044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.300052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:199136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.300058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.300065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:199144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.300071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.300079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:199152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.300086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.300094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:199160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.300100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.300108] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:199168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.300114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.300121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:199176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.300128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.300135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:199184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.300142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.300150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:199192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.300156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.300164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:199200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.300170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.300177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:199208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.300183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.300191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:199216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.257 [2024-04-24 17:17:49.300197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.257 [2024-04-24 17:17:49.300205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:199224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:199232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:199240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 
p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:199248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:199256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:199264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:199272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:199280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:199288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:199296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:199304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:199312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:199320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:199328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:199336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:199344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:199352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:199360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:199368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:199376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:199384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:199392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:199400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 
17:17:49.300519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:199408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:199416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:199424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:199432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:199440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:199448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:199456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:199464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:199472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:199480 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:199488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:199496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:199504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:199512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:199520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:199528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.258 [2024-04-24 17:17:49.300748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:199536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.258 [2024-04-24 17:17:49.300754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.259 [2024-04-24 17:17:49.300764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:199544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.259 [2024-04-24 17:17:49.300770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.259 [2024-04-24 17:17:49.300778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:199552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.259 [2024-04-24 17:17:49.300784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.259 [2024-04-24 17:17:49.300792] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:199560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.259 [2024-04-24 17:17:49.300798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.259 [2024-04-24 17:17:49.300805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:199568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.259 [2024-04-24 17:17:49.300812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.259 [2024-04-24 17:17:49.300820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:199576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.259 [2024-04-24 17:17:49.300828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.259 [2024-04-24 17:17:49.300836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:199584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.259 [2024-04-24 17:17:49.300842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.259 [2024-04-24 17:17:49.300849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:199592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.259 [2024-04-24 17:17:49.300855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.259 [2024-04-24 17:17:49.300863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:199600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.259 [2024-04-24 17:17:49.300869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.259 [2024-04-24 17:17:49.300877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:199608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.259 [2024-04-24 17:17:49.300883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.259 [2024-04-24 17:17:49.300890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:199616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.259 [2024-04-24 17:17:49.300896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.259 [2024-04-24 17:17:49.300904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:199624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.259 [2024-04-24 17:17:49.300910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.259 [2024-04-24 17:17:49.300918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:199632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.259 [2024-04-24 17:17:49.300924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 
p:0 m:0 dnr:0 00:15:17.259 [2024-04-24 17:17:49.300932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:199640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.259 [2024-04-24 17:17:49.300939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.259 [2024-04-24 17:17:49.300947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:199648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.259 [2024-04-24 17:17:49.300953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.259 [2024-04-24 17:17:49.300961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:199656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.259 [2024-04-24 17:17:49.300967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.259 [2024-04-24 17:17:49.300975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:199664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.259 [2024-04-24 17:17:49.300981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.259 [2024-04-24 17:17:49.300989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:199672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.259 [2024-04-24 17:17:49.300995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.259 [2024-04-24 17:17:49.301003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:198656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007700000 len:0x1000 key:0x180100 00:15:17.259 [2024-04-24 17:17:49.301009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.259 [2024-04-24 17:17:49.301017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:198664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007702000 len:0x1000 key:0x180100 00:15:17.259 [2024-04-24 17:17:49.301024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.259 [2024-04-24 17:17:49.301031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:198672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007704000 len:0x1000 key:0x180100 00:15:17.259 [2024-04-24 17:17:49.301038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.259 [2024-04-24 17:17:49.301046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:198680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007706000 len:0x1000 key:0x180100 00:15:17.259 [2024-04-24 17:17:49.301052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.259 [2024-04-24 17:17:49.301060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:198688 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x200007708000 len:0x1000 key:0x180100 00:15:17.259 [2024-04-24 17:17:49.301066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.259 [2024-04-24 17:17:49.301074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:198696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770a000 len:0x1000 key:0x180100 00:15:17.259 [2024-04-24 17:17:49.301080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.259 [2024-04-24 17:17:49.301088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:198704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770c000 len:0x1000 key:0x180100 00:15:17.259 [2024-04-24 17:17:49.301094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.259 [2024-04-24 17:17:49.301103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:198712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770e000 len:0x1000 key:0x180100 00:15:17.259 [2024-04-24 17:17:49.301109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.259 [2024-04-24 17:17:49.301117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:198720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007710000 len:0x1000 key:0x180100 00:15:17.259 [2024-04-24 17:17:49.301124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.259 [2024-04-24 17:17:49.301132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:198728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007712000 len:0x1000 key:0x180100 00:15:17.259 [2024-04-24 17:17:49.301138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.259 [2024-04-24 17:17:49.301146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:198736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007714000 len:0x1000 key:0x180100 00:15:17.259 [2024-04-24 17:17:49.301152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.259 [2024-04-24 17:17:49.301160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:198744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007716000 len:0x1000 key:0x180100 00:15:17.259 [2024-04-24 17:17:49.301166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.259 [2024-04-24 17:17:49.301174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:198752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007718000 len:0x1000 key:0x180100 00:15:17.259 [2024-04-24 17:17:49.301181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.259 [2024-04-24 17:17:49.301189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:198760 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x20000771a000 len:0x1000 key:0x180100 00:15:17.260 [2024-04-24 17:17:49.301195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.260 [2024-04-24 17:17:49.301203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:198768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771c000 len:0x1000 key:0x180100 00:15:17.260 [2024-04-24 17:17:49.301209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.260 [2024-04-24 17:17:49.301217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:198776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771e000 len:0x1000 key:0x180100 00:15:17.260 [2024-04-24 17:17:49.301223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.260 [2024-04-24 17:17:49.301231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:198784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007720000 len:0x1000 key:0x180100 00:15:17.260 [2024-04-24 17:17:49.301237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.260 [2024-04-24 17:17:49.301245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:198792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007722000 len:0x1000 key:0x180100 00:15:17.260 [2024-04-24 17:17:49.301251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.260 [2024-04-24 17:17:49.301260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:198800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007724000 len:0x1000 key:0x180100 00:15:17.260 [2024-04-24 17:17:49.301266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.260 [2024-04-24 17:17:49.301274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:198808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007726000 len:0x1000 key:0x180100 00:15:17.260 [2024-04-24 17:17:49.301280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.260 [2024-04-24 17:17:49.301288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:198816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007728000 len:0x1000 key:0x180100 00:15:17.260 [2024-04-24 17:17:49.301294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.260 [2024-04-24 17:17:49.301306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:198824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772a000 len:0x1000 key:0x180100 00:15:17.260 [2024-04-24 17:17:49.301312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.260 [2024-04-24 17:17:49.301320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:198832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772c000 len:0x1000 
key:0x180100 00:15:17.260 [2024-04-24 17:17:49.301326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.260 [2024-04-24 17:17:49.301334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:198840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772e000 len:0x1000 key:0x180100 00:15:17.260 [2024-04-24 17:17:49.301341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.260 [2024-04-24 17:17:49.314092] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:17.260 [2024-04-24 17:17:49.314104] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:17.260 [2024-04-24 17:17:49.314110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:198848 len:8 PRP1 0x0 PRP2 0x0 00:15:17.260 [2024-04-24 17:17:49.314117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:17.260 [2024-04-24 17:17:49.315837] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:15:17.260 [2024-04-24 17:17:49.316107] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:15:17.260 [2024-04-24 17:17:49.316119] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:15:17.260 [2024-04-24 17:17:49.316124] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:15:17.260 [2024-04-24 17:17:49.316139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:17.260 [2024-04-24 17:17:49.316146] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:15:17.260 [2024-04-24 17:17:49.316155] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:15:17.260 [2024-04-24 17:17:49.316162] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:15:17.260 [2024-04-24 17:17:49.316169] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:15:17.260 [2024-04-24 17:17:49.316187] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:15:17.260 [2024-04-24 17:17:49.316197] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:15:17.260 [2024-04-24 17:17:50.318712] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:15:17.260 [2024-04-24 17:17:50.318750] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:15:17.260 [2024-04-24 17:17:50.318757] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:15:17.260 [2024-04-24 17:17:50.318775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:17.260 [2024-04-24 17:17:50.318782] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:15:17.260 [2024-04-24 17:17:50.318792] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:15:17.260 [2024-04-24 17:17:50.318798] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:15:17.260 [2024-04-24 17:17:50.318805] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:15:17.260 [2024-04-24 17:17:50.318828] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:17.260 [2024-04-24 17:17:50.318835] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:15:17.260 [2024-04-24 17:17:51.321987] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:15:17.260 [2024-04-24 17:17:51.322028] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:15:17.260 [2024-04-24 17:17:51.322035] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:15:17.260 [2024-04-24 17:17:51.322052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:17.260 [2024-04-24 17:17:51.322059] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:15:17.260 [2024-04-24 17:17:51.322069] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:15:17.260 [2024-04-24 17:17:51.322076] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:15:17.260 [2024-04-24 17:17:51.322083] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:15:17.260 [2024-04-24 17:17:51.322102] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:15:17.260 [2024-04-24 17:17:51.322109] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:15:17.260 [2024-04-24 17:17:53.328766] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:15:17.260 [2024-04-24 17:17:53.328804] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:15:17.260 [2024-04-24 17:17:53.328830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:17.260 [2024-04-24 17:17:53.328838] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:15:17.260 [2024-04-24 17:17:53.328848] bdev_nvme.c:2871:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Reset is already in progress. Defer failover until reset completes. 00:15:17.260 [2024-04-24 17:17:53.329573] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:15:17.260 [2024-04-24 17:17:53.329583] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:15:17.260 [2024-04-24 17:17:53.329590] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:15:17.260 [2024-04-24 17:17:53.329650] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:17.260 [2024-04-24 17:17:53.329682] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:15:17.260 [2024-04-24 17:17:54.333513] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:15:17.260 [2024-04-24 17:17:54.333544] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:15:17.260 [2024-04-24 17:17:54.333565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:17.260 [2024-04-24 17:17:54.333589] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:15:17.260 [2024-04-24 17:17:54.334026] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:15:17.260 [2024-04-24 17:17:54.334035] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:15:17.260 [2024-04-24 17:17:54.334042] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:15:17.260 [2024-04-24 17:17:54.334078] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:15:17.260 [2024-04-24 17:17:54.334085] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:15:17.260 [2024-04-24 17:17:56.311106] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno No such device or address (6) 00:15:17.260 [2024-04-24 17:17:56.311138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.260 [2024-04-24 17:17:56.311148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32722 cdw0:6 sqhd:a3b9 p:0 m:0 dnr:0 00:15:17.260 [2024-04-24 17:17:56.311156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.260 [2024-04-24 17:17:56.311163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32722 cdw0:6 sqhd:a3b9 p:0 m:0 dnr:0 00:15:17.260 [2024-04-24 17:17:56.311169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.260 [2024-04-24 17:17:56.311176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32722 cdw0:6 sqhd:a3b9 p:0 m:0 dnr:0 00:15:17.260 [2024-04-24 17:17:56.311182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.260 [2024-04-24 17:17:56.311188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32722 cdw0:6 sqhd:a3b9 p:0 m:0 dnr:0 00:15:17.261 [2024-04-24 17:17:56.402973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:17.261 [2024-04-24 17:17:56.402994] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:15:17.261 [2024-04-24 17:17:56.403012] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:15:17.261 [2024-04-24 17:17:56.403036] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:15:17.261 [2024-04-24 17:17:56.403050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:17.261 [2024-04-24 17:17:56.403056] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:15:17.261 [2024-04-24 17:17:56.403063] bdev_nvme.c:2871:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Reset is already in progress. Defer failover until reset completes. 00:15:17.261 [2024-04-24 17:17:56.403242] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:15:17.261 [2024-04-24 17:17:56.403318] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.403329] bdev_nvme.c:2871:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Reset is already in progress. Defer failover until reset completes. 
00:15:17.261 [2024-04-24 17:17:56.403371] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:15:17.261 [2024-04-24 17:17:56.403379] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:15:17.261 [2024-04-24 17:17:56.403386] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:15:17.261 [2024-04-24 17:17:56.403409] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:17.261 [2024-04-24 17:17:56.403442] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:15:17.261 [2024-04-24 17:17:56.428503] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.438512] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.448541] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.458569] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.468597] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.478622] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.546289] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.546361] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.564791] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.574816] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.584844] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.594869] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.604894] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.621317] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.631340] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.639815] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:17.261 [2024-04-24 17:17:56.641364] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.651391] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:15:17.261 [2024-04-24 17:17:56.661418] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.671443] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.681470] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.691496] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.701522] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.711550] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.721578] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.731604] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.741630] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.751655] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.761681] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.771707] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.781736] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.791761] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.801787] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.811813] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.821839] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.831864] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.841891] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.851917] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.861943] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.871967] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.881993] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:15:17.261 [2024-04-24 17:17:56.892018] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.902045] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.912072] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.922098] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.932124] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.942148] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.952176] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.962201] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.972228] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.982254] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:56.992280] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:57.002306] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:57.012332] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:57.022358] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:57.032383] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:57.042409] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:57.052435] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:57.062460] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:57.072485] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:57.082512] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:57.092537] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:57.102563] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:57.112588] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:15:17.261 [2024-04-24 17:17:57.122615] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:57.132641] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:57.142667] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:57.152692] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:57.162717] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:57.172744] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:57.182771] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:57.192796] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:57.202823] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:57.212850] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:57.222877] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:57.232903] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.261 [2024-04-24 17:17:57.242930] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.262 [2024-04-24 17:17:57.252955] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.262 [2024-04-24 17:17:57.262980] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.262 [2024-04-24 17:17:57.273008] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.262 [2024-04-24 17:17:57.283034] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.262 [2024-04-24 17:17:57.293060] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.262 [2024-04-24 17:17:57.303091] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.262 [2024-04-24 17:17:57.313116] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.262 [2024-04-24 17:17:57.323142] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.262 [2024-04-24 17:17:57.333168] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.262 [2024-04-24 17:17:57.343194] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:15:17.262 [2024-04-24 17:17:57.353219] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.262 [2024-04-24 17:17:57.363246] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.262 [2024-04-24 17:17:57.373271] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.262 [2024-04-24 17:17:57.383296] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.262 [2024-04-24 17:17:57.393322] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.262 [2024-04-24 17:17:57.403350] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:17.262 [2024-04-24 17:17:57.406258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:115432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.262 [2024-04-24 17:17:57.406269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.262 [2024-04-24 17:17:57.406286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:115440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.262 [2024-04-24 17:17:57.406292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.262 [2024-04-24 17:17:57.406304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:115448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.262 [2024-04-24 17:17:57.406311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.262 [2024-04-24 17:17:57.406322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:115456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.262 [2024-04-24 17:17:57.406328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.262 [2024-04-24 17:17:57.406341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:115464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.262 [2024-04-24 17:17:57.406347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.262 [2024-04-24 17:17:57.406359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:115472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.262 [2024-04-24 17:17:57.406365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.262 [2024-04-24 17:17:57.406377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:115480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.262 [2024-04-24 17:17:57.406383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.262 [2024-04-24 17:17:57.406396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:85 nsid:1 lba:115488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.262 [2024-04-24 17:17:57.406402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.262 [2024-04-24 17:17:57.406416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:115496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.262 [2024-04-24 17:17:57.406422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.262 [2024-04-24 17:17:57.406434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:115504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.262 [2024-04-24 17:17:57.406440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.262 [2024-04-24 17:17:57.406452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:115512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.262 [2024-04-24 17:17:57.406458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.262 [2024-04-24 17:17:57.406470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:115520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.262 [2024-04-24 17:17:57.406476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.262 [2024-04-24 17:17:57.406488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:115528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.262 [2024-04-24 17:17:57.406494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.262 [2024-04-24 17:17:57.406506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:115536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.262 [2024-04-24 17:17:57.406512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.262 [2024-04-24 17:17:57.406523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:115544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.262 [2024-04-24 17:17:57.406529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.262 [2024-04-24 17:17:57.406542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:115552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.262 [2024-04-24 17:17:57.406548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.262 [2024-04-24 17:17:57.406560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:115560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.262 [2024-04-24 17:17:57.406566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.262 [2024-04-24 17:17:57.406577] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:115568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.262 [2024-04-24 17:17:57.406583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.262 [2024-04-24 17:17:57.406598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:115576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.262 [2024-04-24 17:17:57.406604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.262 [2024-04-24 17:17:57.406616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:115584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.262 [2024-04-24 17:17:57.406623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.262 [2024-04-24 17:17:57.406636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:115592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.262 [2024-04-24 17:17:57.406642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.262 [2024-04-24 17:17:57.406655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:115600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.262 [2024-04-24 17:17:57.406661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.262 [2024-04-24 17:17:57.406673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:115608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.262 [2024-04-24 17:17:57.406679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.262 [2024-04-24 17:17:57.406691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:115616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.262 [2024-04-24 17:17:57.406697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.262 [2024-04-24 17:17:57.406709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.262 [2024-04-24 17:17:57.406715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.262 [2024-04-24 17:17:57.406727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:115632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.262 [2024-04-24 17:17:57.406733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.262 [2024-04-24 17:17:57.406745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:115640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.262 [2024-04-24 17:17:57.406751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 
sqhd:7530 p:0 m:0 dnr:0 00:15:17.262 [2024-04-24 17:17:57.406763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:115648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.262 [2024-04-24 17:17:57.406769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.262 [2024-04-24 17:17:57.406781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:115656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.262 [2024-04-24 17:17:57.406787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.262 [2024-04-24 17:17:57.406799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:115664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.262 [2024-04-24 17:17:57.406805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.262 [2024-04-24 17:17:57.406817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:115672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.262 [2024-04-24 17:17:57.406824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.262 [2024-04-24 17:17:57.406841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:115680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:17.262 [2024-04-24 17:17:57.406847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32722 cdw0:3a05c840 sqhd:7530 p:0 m:0 dnr:0 00:15:17.262 [2024-04-24 17:17:57.419431] rdma_verbs.c: 83:spdk_rdma_qp_destroy: *WARNING*: Destroying qpair with queued Work Requests 00:15:17.263 [2024-04-24 17:17:57.419492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:17.263 [2024-04-24 17:17:57.419500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:17.263 [2024-04-24 17:17:57.419506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115688 len:8 PRP1 0x0 PRP2 0x0 00:15:17.263 [2024-04-24 17:17:57.419513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:17.263 [2024-04-24 17:17:57.419524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:17.263 [2024-04-24 17:17:57.419530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:17.263 [2024-04-24 17:17:57.419535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115696 len:8 PRP1 0x0 PRP2 0x0 00:15:17.263 [2024-04-24 17:17:57.419541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:17.263 [2024-04-24 17:17:57.419637] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:15:17.263 [2024-04-24 17:17:57.419728] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:15:17.263 [2024-04-24 17:17:57.419737] 
nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:15:17.263 [2024-04-24 17:17:57.419743] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:15:17.263 [2024-04-24 17:17:57.419757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:17.263 [2024-04-24 17:17:57.419764] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:15:17.263 [2024-04-24 17:17:57.419790] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:15:17.263 [2024-04-24 17:17:57.419796] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:15:17.263 [2024-04-24 17:17:57.419803] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:15:17.263 [2024-04-24 17:17:57.419817] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:17.263 [2024-04-24 17:17:57.419823] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:15:17.263 [2024-04-24 17:17:58.422390] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:15:17.263 [2024-04-24 17:17:58.422435] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:15:17.263 [2024-04-24 17:17:58.422443] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:15:17.263 [2024-04-24 17:17:58.422462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:17.263 [2024-04-24 17:17:58.422469] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:15:17.263 [2024-04-24 17:17:58.422480] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:15:17.263 [2024-04-24 17:17:58.422486] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:15:17.263 [2024-04-24 17:17:58.422492] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:15:17.263 [2024-04-24 17:17:58.422513] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:15:17.263 [2024-04-24 17:17:58.422521] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:15:17.263 [2024-04-24 17:17:59.426593] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:15:17.263 [2024-04-24 17:17:59.426637] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:15:17.263 [2024-04-24 17:17:59.426643] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:15:17.263 [2024-04-24 17:17:59.426676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:17.263 [2024-04-24 17:17:59.426684] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:15:17.263 [2024-04-24 17:17:59.426703] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:15:17.263 [2024-04-24 17:17:59.426709] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:15:17.263 [2024-04-24 17:17:59.426717] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:15:17.263 [2024-04-24 17:17:59.426736] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:17.263 [2024-04-24 17:17:59.426743] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:15:17.263 [2024-04-24 17:18:01.432298] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:15:17.263 [2024-04-24 17:18:01.432351] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:15:17.263 [2024-04-24 17:18:01.432392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:17.263 [2024-04-24 17:18:01.432401] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:15:17.263 [2024-04-24 17:18:01.432415] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:15:17.263 [2024-04-24 17:18:01.432421] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:15:17.263 [2024-04-24 17:18:01.432429] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:15:17.263 [2024-04-24 17:18:01.432458] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:15:17.263 [2024-04-24 17:18:01.432465] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:15:17.263 [2024-04-24 17:18:03.439246] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:15:17.263 [2024-04-24 17:18:03.439282] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:15:17.263 [2024-04-24 17:18:03.439304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:17.263 [2024-04-24 17:18:03.439312] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:15:17.263 [2024-04-24 17:18:03.441011] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:15:17.263 [2024-04-24 17:18:03.441029] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:15:17.263 [2024-04-24 17:18:03.441037] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:15:17.263 [2024-04-24 17:18:03.441059] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:17.263 [2024-04-24 17:18:03.441066] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:15:17.263 [2024-04-24 17:18:05.446031] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:15:17.263 [2024-04-24 17:18:05.446071] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:15:17.263 [2024-04-24 17:18:05.446100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:17.263 [2024-04-24 17:18:05.446108] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:15:17.263 [2024-04-24 17:18:05.446119] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:15:17.263 [2024-04-24 17:18:05.446125] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:15:17.263 [2024-04-24 17:18:05.446132] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:15:17.263 [2024-04-24 17:18:05.446152] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:17.263 [2024-04-24 17:18:05.446159] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:15:17.263 [2024-04-24 17:18:06.690538] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:17.263
00:15:17.263 Latency(us)
00:15:17.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:17.263 Job: Nvme_mlx_0_0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:15:17.263 Verification LBA range: start 0x0 length 0x8000
00:15:17.263 Nvme_mlx_0_0n1 : 90.01 11215.10 43.81 0.00 0.00 11391.78 1997.29 9139588.14
00:15:17.263 Job: Nvme_mlx_0_1n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:15:17.263 Verification LBA range: start 0x0 length 0x8000
00:15:17.263 Nvme_mlx_0_1n1 : 90.01 10080.10 39.38 0.00 0.00 12680.37 866.01 11184810.67
00:15:17.263 ===================================================================================================================
00:15:17.263 Total : 21295.20 83.18 0.00 0.00 12001.74 866.01 11184810.67
00:15:17.263 Received shutdown signal, test time was about 90.000000 seconds
00:15:17.263
00:15:17.263 Latency(us)
00:15:17.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:17.263 ===================================================================================================================
00:15:17.263 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:15:17.263 17:19:13 -- target/device_removal.sh@123 -- # trap - SIGINT SIGTERM EXIT
00:15:17.263 17:19:13 -- target/device_removal.sh@124 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt
00:15:17.263 17:19:13 -- target/device_removal.sh@202 -- # killprocess 3018181
00:15:17.263 17:19:13 -- common/autotest_common.sh@936 -- # '[' -z 3018181 ']'
00:15:17.263 17:19:13 -- common/autotest_common.sh@940 -- # kill -0 3018181
00:15:17.263 17:19:13 -- common/autotest_common.sh@941 -- # uname
00:15:17.263 17:19:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:17.263 17:19:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3018181
00:15:17.263 17:19:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:15:17.263 17:19:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:15:17.263 17:19:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3018181'
00:15:17.263 killing process with pid 3018181
00:15:17.263 17:19:13 -- common/autotest_common.sh@955 -- # kill 3018181
00:15:17.263 17:19:13 -- common/autotest_common.sh@960 -- # wait 3018181
00:15:17.263 17:19:14 -- target/device_removal.sh@203 -- # nvmfpid=
00:15:17.263 17:19:14 -- target/device_removal.sh@205 -- # return 0
00:15:17.263
00:15:17.263 real 1m33.193s
00:15:17.263 user 4m29.283s
00:15:17.263 sys 0m3.698s
00:15:17.263 17:19:14 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:15:17.263 17:19:14 -- common/autotest_common.sh@10 -- # set +x
00:15:17.263 ************************************
00:15:17.264 END TEST nvmf_device_removal_pci_remove_no_srq
00:15:17.264 ************************************
00:15:17.264 17:19:14 -- target/device_removal.sh@312 -- # run_test nvmf_device_removal_pci_remove test_remove_and_rescan
00:15:17.264 17:19:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:15:17.264 17:19:14 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:15:17.264 17:19:14 -- common/autotest_common.sh@10 -- # set +x
00:15:17.264 ************************************
00:15:17.264 START TEST nvmf_device_removal_pci_remove
00:15:17.264 ************************************
00:15:17.264 17:19:14 -- common/autotest_common.sh@1111 -- # test_remove_and_rescan
00:15:17.264 17:19:14 -- target/device_removal.sh@128 -- # nvmfappstart -m 0x3
00:15:17.264 17:19:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:17.264 17:19:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:17.264 17:19:14 -- common/autotest_common.sh@10 -- # set +x 00:15:17.264 17:19:14 -- nvmf/common.sh@470 -- # nvmfpid=3020369 00:15:17.264 17:19:14 -- nvmf/common.sh@471 -- # waitforlisten 3020369 00:15:17.264 17:19:14 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:17.264 17:19:14 -- common/autotest_common.sh@817 -- # '[' -z 3020369 ']' 00:15:17.264 17:19:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.264 17:19:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:17.264 17:19:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.264 17:19:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:17.264 17:19:14 -- common/autotest_common.sh@10 -- # set +x 00:15:17.264 [2024-04-24 17:19:14.306981] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:15:17.264 [2024-04-24 17:19:14.307024] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.264 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.264 [2024-04-24 17:19:14.363812] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:17.264 [2024-04-24 17:19:14.440785] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.264 [2024-04-24 17:19:14.440821] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.264 [2024-04-24 17:19:14.440834] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:17.264 [2024-04-24 17:19:14.440841] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:17.264 [2024-04-24 17:19:14.440862] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:17.264 [2024-04-24 17:19:14.440907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.264 [2024-04-24 17:19:14.440910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.264 17:19:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:17.264 17:19:15 -- common/autotest_common.sh@850 -- # return 0 00:15:17.264 17:19:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:17.264 17:19:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:17.264 17:19:15 -- common/autotest_common.sh@10 -- # set +x 00:15:17.264 17:19:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:17.264 17:19:15 -- target/device_removal.sh@130 -- # create_subsystem_and_connect 00:15:17.264 17:19:15 -- target/device_removal.sh@45 -- # local -gA netdev_nvme_dict 00:15:17.264 17:19:15 -- target/device_removal.sh@46 -- # netdev_nvme_dict=() 00:15:17.264 17:19:15 -- target/device_removal.sh@48 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:17.264 17:19:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:17.264 17:19:15 -- common/autotest_common.sh@10 -- # set +x 00:15:17.264 [2024-04-24 17:19:15.161927] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x19d38b0/0x19d7da0) succeed. 00:15:17.264 [2024-04-24 17:19:15.170883] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x19d4db0/0x1a19430) succeed. 00:15:17.264 17:19:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:17.264 17:19:15 -- target/device_removal.sh@49 -- # get_rdma_if_list 00:15:17.264 17:19:15 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:17.264 17:19:15 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:17.264 17:19:15 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:17.264 17:19:15 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:17.264 17:19:15 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:17.264 17:19:15 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:17.264 17:19:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.264 17:19:15 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:17.264 17:19:15 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:17.264 17:19:15 -- nvmf/common.sh@105 -- # continue 2 00:15:17.264 17:19:15 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:17.264 17:19:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.264 17:19:15 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:17.264 17:19:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.264 17:19:15 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:17.264 17:19:15 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:17.264 17:19:15 -- nvmf/common.sh@105 -- # continue 2 00:15:17.264 17:19:15 -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:15:17.264 17:19:15 -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_0 00:15:17.264 17:19:15 -- target/device_removal.sh@25 -- # local -a dev_name 00:15:17.264 17:19:15 -- target/device_removal.sh@27 -- # dev_name=mlx_0_0 00:15:17.264 17:19:15 -- target/device_removal.sh@28 -- # malloc_name=mlx_0_0 00:15:17.264 17:19:15 -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_0 00:15:17.264 17:19:15 -- target/device_removal.sh@21 -- # echo 
nqn.2016-06.io.spdk:system_mlx_0_0 00:15:17.264 17:19:15 -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:15:17.264 17:19:15 -- target/device_removal.sh@30 -- # get_ip_address mlx_0_0 00:15:17.264 17:19:15 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:17.264 17:19:15 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:17.264 17:19:15 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:17.264 17:19:15 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:17.264 17:19:15 -- target/device_removal.sh@30 -- # ip=192.168.100.8 00:15:17.264 17:19:15 -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_0 00:15:17.264 17:19:15 -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:15:17.264 17:19:15 -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:15:17.264 17:19:15 -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_0 00:15:17.264 17:19:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:17.264 17:19:15 -- common/autotest_common.sh@10 -- # set +x 00:15:17.264 17:19:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:17.264 17:19:15 -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0 00:15:17.264 17:19:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:17.264 17:19:15 -- common/autotest_common.sh@10 -- # set +x 00:15:17.264 17:19:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:17.264 17:19:15 -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0 00:15:17.264 17:19:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:17.264 17:19:15 -- common/autotest_common.sh@10 -- # set +x 00:15:17.264 17:19:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:17.264 17:19:15 -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 00:15:17.264 17:19:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:17.264 17:19:15 -- common/autotest_common.sh@10 -- # set +x 00:15:17.264 [2024-04-24 17:19:15.351489] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:17.264 17:19:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:17.264 17:19:15 -- target/device_removal.sh@41 -- # return 0 00:15:17.264 17:19:15 -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_0 00:15:17.264 17:19:15 -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:15:17.264 17:19:15 -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_1 00:15:17.264 17:19:15 -- target/device_removal.sh@25 -- # local -a dev_name 00:15:17.264 17:19:15 -- target/device_removal.sh@27 -- # dev_name=mlx_0_1 00:15:17.264 17:19:15 -- target/device_removal.sh@28 -- # malloc_name=mlx_0_1 00:15:17.264 17:19:15 -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_1 00:15:17.264 17:19:15 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:15:17.264 17:19:15 -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:15:17.264 17:19:15 -- target/device_removal.sh@30 -- # get_ip_address mlx_0_1 00:15:17.264 17:19:15 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:17.264 17:19:15 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:17.264 17:19:15 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:17.264 17:19:15 -- nvmf/common.sh@113 -- # cut -d/ -f1 
00:15:17.264 17:19:15 -- target/device_removal.sh@30 -- # ip=192.168.100.9 00:15:17.264 17:19:15 -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_1 00:15:17.264 17:19:15 -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:15:17.264 17:19:15 -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:15:17.265 17:19:15 -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_1 00:15:17.265 17:19:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:17.265 17:19:15 -- common/autotest_common.sh@10 -- # set +x 00:15:17.265 17:19:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:17.265 17:19:15 -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_1 -a -s SPDK000mlx_0_1 00:15:17.265 17:19:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:17.265 17:19:15 -- common/autotest_common.sh@10 -- # set +x 00:15:17.265 17:19:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:17.265 17:19:15 -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_1 mlx_0_1 00:15:17.265 17:19:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:17.265 17:19:15 -- common/autotest_common.sh@10 -- # set +x 00:15:17.265 17:19:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:17.265 17:19:15 -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 00:15:17.265 17:19:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:17.265 17:19:15 -- common/autotest_common.sh@10 -- # set +x 00:15:17.265 [2024-04-24 17:19:15.430243] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:15:17.265 17:19:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:17.265 17:19:15 -- target/device_removal.sh@41 -- # return 0 00:15:17.265 17:19:15 -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_1 00:15:17.265 17:19:15 -- target/device_removal.sh@53 -- # return 0 00:15:17.265 17:19:15 -- target/device_removal.sh@132 -- # generate_io_traffic_with_bdevperf mlx_0_0 mlx_0_1 00:15:17.265 17:19:15 -- target/device_removal.sh@87 -- # dev_names=('mlx_0_0' 'mlx_0_1') 00:15:17.265 17:19:15 -- target/device_removal.sh@87 -- # local dev_names 00:15:17.265 17:19:15 -- target/device_removal.sh@89 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:17.265 17:19:15 -- target/device_removal.sh@91 -- # bdevperf_pid=3020425 00:15:17.265 17:19:15 -- target/device_removal.sh@93 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; kill -9 $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:17.265 17:19:15 -- target/device_removal.sh@90 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:15:17.265 17:19:15 -- target/device_removal.sh@94 -- # waitforlisten 3020425 /var/tmp/bdevperf.sock 00:15:17.265 17:19:15 -- common/autotest_common.sh@817 -- # '[' -z 3020425 ']' 00:15:17.265 17:19:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:17.265 17:19:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:17.265 17:19:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:17.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:17.265 17:19:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:17.265 17:19:15 -- common/autotest_common.sh@10 -- # set +x 00:15:17.265 17:19:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:17.265 17:19:16 -- common/autotest_common.sh@850 -- # return 0 00:15:17.265 17:19:16 -- target/device_removal.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:15:17.265 17:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:17.265 17:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:17.265 17:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:17.265 17:19:16 -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:15:17.265 17:19:16 -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_0 00:15:17.265 17:19:16 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:15:17.265 17:19:16 -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:15:17.265 17:19:16 -- target/device_removal.sh@102 -- # get_ip_address mlx_0_0 00:15:17.265 17:19:16 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:17.265 17:19:16 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:17.265 17:19:16 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:17.265 17:19:16 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:17.265 17:19:16 -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.8 00:15:17.265 17:19:16 -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1 00:15:17.265 17:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:17.265 17:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:17.265 Nvme_mlx_0_0n1 00:15:17.265 17:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:17.265 17:19:16 -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:15:17.265 17:19:16 -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_1 00:15:17.265 17:19:16 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:15:17.265 17:19:16 -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:15:17.265 17:19:16 -- target/device_removal.sh@102 -- # get_ip_address mlx_0_1 00:15:17.265 17:19:16 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:17.265 17:19:16 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:17.265 17:19:16 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:17.265 17:19:16 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:17.265 17:19:16 -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.9 00:15:17.265 17:19:16 -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1 00:15:17.265 17:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:17.265 17:19:16 -- common/autotest_common.sh@10 -- # set +x 00:15:17.265 Nvme_mlx_0_1n1 00:15:17.265 17:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:17.265 17:19:16 -- target/device_removal.sh@110 -- # bdevperf_rpc_pid=3020453 00:15:17.265 17:19:16 -- target/device_removal.sh@112 -- # sleep 5 00:15:17.265 17:19:16 -- target/device_removal.sh@109 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:15:17.265 17:19:21 -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:15:17.265 17:19:21 -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_0 00:15:17.265 17:19:21 -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_0 00:15:17.265 17:19:21 -- target/device_removal.sh@71 -- # dev_name=mlx_0_0 00:15:17.265 17:19:21 -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_0 00:15:17.265 17:19:21 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:15:17.265 17:19:21 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.0/net/mlx_0_0/device 00:15:17.265 17:19:21 -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/infiniband 00:15:17.265 17:19:21 -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_0 00:15:17.265 17:19:21 -- target/device_removal.sh@137 -- # get_ip_address mlx_0_0 00:15:17.265 17:19:21 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:17.265 17:19:21 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:17.265 17:19:21 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:17.265 17:19:21 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:17.265 17:19:21 -- target/device_removal.sh@137 -- # origin_ip=192.168.100.8 00:15:17.265 17:19:21 -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_0 00:15:17.265 17:19:21 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:15:17.265 17:19:21 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.0/net/mlx_0_0/device 00:15:17.265 17:19:21 -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0 00:15:17.265 17:19:21 -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:15:17.265 17:19:21 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:15:17.265 17:19:21 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:15:17.265 17:19:21 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:15:17.265 17:19:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:17.265 17:19:21 -- common/autotest_common.sh@10 -- # set +x 00:15:17.265 17:19:21 -- target/device_removal.sh@77 -- # grep mlx5_0 00:15:17.265 17:19:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:17.265 mlx5_0 00:15:17.265 17:19:21 -- target/device_removal.sh@78 -- # return 0 00:15:17.265 17:19:21 -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_0 00:15:17.265 17:19:21 -- target/device_removal.sh@66 -- # dev_name=mlx_0_0 00:15:17.265 17:19:21 -- target/device_removal.sh@67 -- # echo 1 00:15:17.265 17:19:21 -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_0 00:15:17.265 17:19:21 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:15:17.265 17:19:21 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.0/net/mlx_0_0/device 00:15:17.265 [2024-04-24 17:19:21.633546] rdma.c:3610:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.8:4420 on device mlx5_0 is being removed. 
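Before the hot-removal just logged, each target port was attached inside bdevperf as an NVMe bdev over its RPC socket (Nvme_mlx_0_0n1 and Nvme_mlx_0_1n1 above are the resulting namespaces) and the queued verify workload was started through bdevperf.py. Condensed from the trace, with paths relative to the SPDK tree and option letters copied verbatim rather than re-explained:

  # Attach both NVMe-oF ports inside bdevperf, then kick off the queued workload.
  sock=/var/tmp/bdevperf.sock
  ./scripts/rpc.py -s $sock bdev_nvme_set_options -r -1
  ./scripts/rpc.py -s $sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 -t rdma \
      -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1
  ./scripts/rpc.py -s $sock bdev_nvme_attach_controller -b Nvme_mlx_0_1 -t rdma \
      -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1
  ./examples/bdev/bdevperf/bdevperf.py -t 120 -s $sock perform_tests &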
00:15:17.265 [2024-04-24 17:19:21.633615] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:15:17.265 [2024-04-24 17:19:21.633666] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:15:18.198 17:19:27 -- target/device_removal.sh@147 -- # seq 1 10 00:15:18.198 17:19:27 -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:15:18.198 17:19:27 -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:15:18.198 17:19:27 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:15:18.198 17:19:27 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:15:18.198 17:19:27 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:15:18.198 17:19:27 -- target/device_removal.sh@77 -- # grep mlx5_0 00:15:18.198 17:19:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:18.198 17:19:27 -- common/autotest_common.sh@10 -- # set +x 00:15:18.198 17:19:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:18.198 17:19:27 -- target/device_removal.sh@78 -- # return 1 00:15:18.198 17:19:27 -- target/device_removal.sh@149 -- # break 00:15:18.198 17:19:27 -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:15:18.198 17:19:27 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:15:18.198 17:19:27 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:15:18.198 17:19:27 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:15:18.198 17:19:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:18.198 17:19:27 -- common/autotest_common.sh@10 -- # set +x 00:15:18.198 17:19:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:18.198 17:19:27 -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:15:18.198 17:19:27 -- target/device_removal.sh@160 -- # rescan_pci 00:15:18.198 17:19:27 -- target/device_removal.sh@57 -- # echo 1 00:15:19.129 [2024-04-24 17:19:28.234263] rdma.c:3314:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x1c559a0, err 11. Skip rescan. 00:15:19.129 17:19:28 -- target/device_removal.sh@162 -- # seq 1 10 00:15:19.129 17:19:28 -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:15:19.129 17:19:28 -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/net 00:15:19.129 17:19:28 -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_0 00:15:19.129 17:19:28 -- target/device_removal.sh@164 -- # [[ -z mlx_0_0 ]] 00:15:19.129 17:19:28 -- target/device_removal.sh@166 -- # [[ mlx_0_0 != \m\l\x\_\0\_\0 ]] 00:15:19.129 17:19:28 -- target/device_removal.sh@171 -- # break 00:15:19.129 17:19:28 -- target/device_removal.sh@175 -- # [[ -z mlx_0_0 ]] 00:15:19.129 17:19:28 -- target/device_removal.sh@179 -- # ip link set mlx_0_0 up 00:15:19.387 [2024-04-24 17:19:28.610643] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x19d6540/0x19d7da0) succeed. 00:15:19.387 [2024-04-24 17:19:28.610694] rdma.c:3367:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.8:4420 is still failed(-1) to listen. 
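The removal cycle above is driven through sysfs: the test resolves the netdev to its PCI function, writes 1 to the device's remove node, polls nvmf_get_stats until mlx5_0 drops out of the target's poll group, then triggers a bus rescan so the port can come back. A hedged sketch follows; the sysfs targets ($pci_dir/remove and /sys/bus/pci/rescan) and the 1-second poll interval are assumptions, since the trace only shows "echo 1" and the readlink that resolves the PCI directory:

  # Hot-remove one port and wait for the nvmf target to drop its RDMA device.
  dev=mlx_0_0
  pci_dir=$(readlink -f /sys/class/net/$dev/device)   # e.g. .../0000:da:00.0, as in the trace
  rdma_dev=$(ls $pci_dir/infiniband)                   # e.g. mlx5_0

  echo 1 > $pci_dir/remove                             # assumed sysfs remove node

  for i in $(seq 1 10); do
      ./scripts/rpc.py nvmf_get_stats \
          | jq -r '.poll_groups[0].transports[].devices[].name' \
          | grep -q $rdma_dev || break                 # stop once the device is gone
      sleep 1
  done

  echo 1 > /sys/bus/pci/rescan                         # assumed rescan trigger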
00:15:22.690 17:19:31 -- target/device_removal.sh@180 -- # get_ip_address mlx_0_0 00:15:22.690 17:19:31 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:22.690 17:19:31 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:22.690 17:19:31 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:22.690 17:19:31 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:22.690 17:19:31 -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:15:22.690 17:19:31 -- target/device_removal.sh@181 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:15:22.690 17:19:31 -- target/device_removal.sh@186 -- # seq 1 10 00:15:22.690 17:19:31 -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:15:22.690 17:19:31 -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:15:22.690 17:19:31 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:15:22.690 17:19:31 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:15:22.690 17:19:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:22.690 17:19:31 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:15:22.690 17:19:31 -- common/autotest_common.sh@10 -- # set +x 00:15:22.690 [2024-04-24 17:19:31.647693] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:22.690 [2024-04-24 17:19:31.647724] rdma.c:3373:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.8:4420 come back 00:15:22.690 [2024-04-24 17:19:31.647736] rdma.c:3897:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:15:22.690 [2024-04-24 17:19:31.647747] rdma.c:3897:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:15:22.690 17:19:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:22.690 17:19:31 -- target/device_removal.sh@187 -- # ib_count=2 00:15:22.690 17:19:31 -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:15:22.690 17:19:31 -- target/device_removal.sh@189 -- # break 00:15:22.690 17:19:31 -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:15:22.690 17:19:31 -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_1 00:15:22.690 17:19:31 -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_1 00:15:22.690 17:19:31 -- target/device_removal.sh@71 -- # dev_name=mlx_0_1 00:15:22.690 17:19:31 -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_1 00:15:22.690 17:19:31 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:15:22.690 17:19:31 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.1/net/mlx_0_1/device 00:15:22.690 17:19:31 -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.1/infiniband 00:15:22.690 17:19:31 -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_1 00:15:22.690 17:19:31 -- target/device_removal.sh@137 -- # get_ip_address mlx_0_1 00:15:22.690 17:19:31 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:22.690 17:19:31 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:22.690 17:19:31 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:22.690 17:19:31 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:22.690 17:19:31 -- target/device_removal.sh@137 -- # origin_ip=192.168.100.9 00:15:22.690 17:19:31 -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_1 00:15:22.690 17:19:31 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:15:22.690 17:19:31 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.1/net/mlx_0_1/device 00:15:22.690 17:19:31 -- 
target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.1 00:15:22.690 17:19:31 -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:15:22.690 17:19:31 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:15:22.690 17:19:31 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:15:22.690 17:19:31 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:15:22.690 17:19:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:22.690 17:19:31 -- target/device_removal.sh@77 -- # grep mlx5_1 00:15:22.690 17:19:31 -- common/autotest_common.sh@10 -- # set +x 00:15:22.690 17:19:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:22.690 mlx5_1 00:15:22.690 17:19:31 -- target/device_removal.sh@78 -- # return 0 00:15:22.690 17:19:31 -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_1 00:15:22.690 17:19:31 -- target/device_removal.sh@66 -- # dev_name=mlx_0_1 00:15:22.690 17:19:31 -- target/device_removal.sh@67 -- # echo 1 00:15:22.690 17:19:31 -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_1 00:15:22.690 17:19:31 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:15:22.690 17:19:31 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:da:00.1/net/mlx_0_1/device 00:15:22.690 [2024-04-24 17:19:31.791411] rdma.c:3610:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.9:4420 on device mlx5_1 is being removed. 00:15:22.690 [2024-04-24 17:19:31.794069] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno No such device or address (6) 00:15:22.690 [2024-04-24 17:19:31.797062] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:15:22.691 [2024-04-24 17:19:31.797075] rdma.c: 916:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 94 00:15:29.238 17:19:37 -- target/device_removal.sh@147 -- # seq 1 10 00:15:29.238 17:19:37 -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:15:29.238 17:19:37 -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:15:29.238 17:19:37 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:15:29.238 17:19:37 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:15:29.238 17:19:37 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:15:29.238 17:19:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:29.238 17:19:37 -- target/device_removal.sh@77 -- # grep mlx5_1 00:15:29.238 17:19:37 -- common/autotest_common.sh@10 -- # set +x 00:15:29.238 17:19:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:29.238 17:19:37 -- target/device_removal.sh@78 -- # return 1 00:15:29.238 17:19:37 -- target/device_removal.sh@149 -- # break 00:15:29.238 17:19:37 -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:15:29.238 17:19:37 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:15:29.238 17:19:37 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:15:29.238 17:19:37 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:15:29.238 17:19:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:29.238 17:19:37 -- common/autotest_common.sh@10 -- # set +x 00:15:29.238 17:19:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:29.238 17:19:37 -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:15:29.238 17:19:37 -- 
target/device_removal.sh@160 -- # rescan_pci 00:15:29.238 17:19:37 -- target/device_removal.sh@57 -- # echo 1 00:15:30.170 17:19:39 -- target/device_removal.sh@162 -- # seq 1 10 00:15:30.170 17:19:39 -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:15:30.170 17:19:39 -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.1/net 00:15:30.170 17:19:39 -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_1 00:15:30.170 17:19:39 -- target/device_removal.sh@164 -- # [[ -z mlx_0_1 ]] 00:15:30.170 17:19:39 -- target/device_removal.sh@166 -- # [[ mlx_0_1 != \m\l\x\_\0\_\1 ]] 00:15:30.170 17:19:39 -- target/device_removal.sh@171 -- # break 00:15:30.170 17:19:39 -- target/device_removal.sh@175 -- # [[ -z mlx_0_1 ]] 00:15:30.170 17:19:39 -- target/device_removal.sh@179 -- # ip link set mlx_0_1 up 00:15:30.170 17:19:39 -- target/device_removal.sh@180 -- # get_ip_address mlx_0_1 00:15:30.170 17:19:39 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:30.170 17:19:39 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:30.170 17:19:39 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:30.171 17:19:39 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:30.171 17:19:39 -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:15:30.171 17:19:39 -- target/device_removal.sh@181 -- # ip addr add 192.168.100.9/24 dev mlx_0_1 00:15:30.171 17:19:39 -- target/device_removal.sh@186 -- # seq 1 10 00:15:30.171 17:19:39 -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:15:30.171 17:19:39 -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:15:30.171 17:19:39 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:15:30.171 17:19:39 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:15:30.171 17:19:39 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:15:30.171 17:19:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:30.171 17:19:39 -- common/autotest_common.sh@10 -- # set +x 00:15:30.428 [2024-04-24 17:19:39.426014] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x19d6bc0/0x1a19430) succeed. 
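Recovery of the second port follows the same pattern as the first: after the PCI rescan the netdev comes back without an address, so the test brings it up, re-adds the /24, and then waits for the listener to be re-announced by counting RDMA devices in nvmf_get_stats until the count rises above the post-removal value. A sketch of that wait, with the poll interval as an assumption:

  # Restore the re-scanned port and wait for the target to pick it up again.
  dev=mlx_0_1
  addr=192.168.100.9
  ib_count_after_remove=1

  ip link set $dev up
  ip addr add $addr/24 dev $dev

  for i in $(seq 1 10); do
      ib_count=$(./scripts/rpc.py nvmf_get_stats \
          | jq -r '.poll_groups[0].transports[].devices | length')
      (( ib_count > ib_count_after_remove )) && break
      sleep 1
  done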
00:15:30.428 [2024-04-24 17:19:39.430480] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:15:30.428 [2024-04-24 17:19:39.430498] rdma.c:3373:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.9:4420 come back 00:15:30.428 17:19:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:30.428 17:19:39 -- target/device_removal.sh@187 -- # ib_count=2 00:15:30.428 17:19:39 -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:15:30.428 17:19:39 -- target/device_removal.sh@189 -- # break 00:15:30.428 17:19:39 -- target/device_removal.sh@200 -- # stop_bdevperf 00:15:30.428 17:19:39 -- target/device_removal.sh@116 -- # wait 3020453 00:16:38.130 0 00:16:38.130 17:20:46 -- target/device_removal.sh@118 -- # killprocess 3020425 00:16:38.130 17:20:46 -- common/autotest_common.sh@936 -- # '[' -z 3020425 ']' 00:16:38.130 17:20:46 -- common/autotest_common.sh@940 -- # kill -0 3020425 00:16:38.130 17:20:46 -- common/autotest_common.sh@941 -- # uname 00:16:38.130 17:20:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:38.130 17:20:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3020425 00:16:38.130 17:20:46 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:38.130 17:20:46 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:38.130 17:20:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3020425' 00:16:38.130 killing process with pid 3020425 00:16:38.130 17:20:46 -- common/autotest_common.sh@955 -- # kill 3020425 00:16:38.130 17:20:46 -- common/autotest_common.sh@960 -- # wait 3020425 00:16:38.130 17:20:47 -- target/device_removal.sh@119 -- # bdevperf_pid= 00:16:38.130 17:20:47 -- target/device_removal.sh@121 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt 00:16:38.130 [2024-04-24 17:19:15.482416] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:16:38.130 [2024-04-24 17:19:15.482459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3020425 ] 00:16:38.130 EAL: No free 2048 kB hugepages reported on node 1 00:16:38.130 [2024-04-24 17:19:15.531725] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.130 [2024-04-24 17:19:15.602355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:38.130 Running I/O for 90 seconds... 
00:16:38.130 [2024-04-24 17:19:21.627470] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:16:38.130 [2024-04-24 17:19:21.627507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.130 [2024-04-24 17:19:21.627517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32716 cdw0:16 sqhd:c3b9 p:0 m:0 dnr:0 00:16:38.130 [2024-04-24 17:19:21.627526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.131 [2024-04-24 17:19:21.627536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32716 cdw0:16 sqhd:c3b9 p:0 m:0 dnr:0 00:16:38.131 [2024-04-24 17:19:21.627546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.131 [2024-04-24 17:19:21.627553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32716 cdw0:16 sqhd:c3b9 p:0 m:0 dnr:0 00:16:38.131 [2024-04-24 17:19:21.627560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.131 [2024-04-24 17:19:21.627566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32716 cdw0:16 sqhd:c3b9 p:0 m:0 dnr:0 00:16:38.131 [2024-04-24 17:19:21.630414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:38.131 [2024-04-24 17:19:21.630438] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:16:38.131 [2024-04-24 17:19:21.630461] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:16:38.131 [2024-04-24 17:19:21.637467] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:21.647493] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:21.657516] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:21.668114] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:21.678364] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:21.688534] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:21.698705] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:21.708840] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:21.718865] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:21.728909] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:16:38.131 [2024-04-24 17:19:21.739121] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:21.749282] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:21.759552] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:21.769663] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:21.779882] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:21.790070] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:21.800200] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:21.810440] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:21.820606] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:21.830768] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:21.840835] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:21.851188] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:21.861646] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:21.871782] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:21.881809] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:21.891836] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:21.901920] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:21.911997] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:21.922395] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:21.932934] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:21.942959] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:21.952983] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:21.963256] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:16:38.131 [2024-04-24 17:19:21.973524] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:21.983602] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:21.993688] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.003902] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.014077] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.024641] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.034793] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.044946] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.054976] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.065209] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.075345] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.086416] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.096442] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.106469] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.116494] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.126550] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.136800] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.147130] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.157344] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.167496] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.177711] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.187849] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.198072] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:16:38.131 [2024-04-24 17:19:22.208300] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.218545] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.228655] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.238860] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.249044] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.259231] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.269425] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.279583] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.289843] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.300021] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.310213] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.320545] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.330835] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.341096] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.351292] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.361679] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.371848] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.382168] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.392370] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.402696] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.412914] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.423118] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.433330] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:16:38.131 [2024-04-24 17:19:22.443545] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.131 [2024-04-24 17:19:22.453749] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.132 [2024-04-24 17:19:22.463949] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.132 [2024-04-24 17:19:22.474288] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.132 [2024-04-24 17:19:22.484566] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.132 [2024-04-24 17:19:22.494770] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.132 [2024-04-24 17:19:22.504912] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.132 [2024-04-24 17:19:22.515122] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.132 [2024-04-24 17:19:22.525334] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.132 [2024-04-24 17:19:22.535568] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.132 [2024-04-24 17:19:22.545795] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.132 [2024-04-24 17:19:22.555956] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.132 [2024-04-24 17:19:22.566331] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.132 [2024-04-24 17:19:22.576574] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.132 [2024-04-24 17:19:22.586756] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.132 [2024-04-24 17:19:22.596962] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.132 [2024-04-24 17:19:22.607130] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.132 [2024-04-24 17:19:22.617379] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.132 [2024-04-24 17:19:22.627553] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:16:38.132 [2024-04-24 17:19:22.632929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:203680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077e8000 len:0x1000 key:0x6f7d6 00:16:38.132 [2024-04-24 17:19:22.632946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.132 [2024-04-24 17:19:22.632963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:203688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ea000 len:0x1000 key:0x6f7d6 00:16:38.132 [2024-04-24 17:19:22.632974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.132 [2024-04-24 17:19:22.632983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:203696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ec000 len:0x1000 key:0x6f7d6 00:16:38.132 [2024-04-24 17:19:22.632989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.132 [2024-04-24 17:19:22.632998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:203704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ee000 len:0x1000 key:0x6f7d6 00:16:38.132 [2024-04-24 17:19:22.633004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.132 [2024-04-24 17:19:22.633012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:203712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f0000 len:0x1000 key:0x6f7d6 00:16:38.132 [2024-04-24 17:19:22.633018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.132 [2024-04-24 17:19:22.633026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:203720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f2000 len:0x1000 key:0x6f7d6 00:16:38.132 [2024-04-24 17:19:22.633033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.132 [2024-04-24 17:19:22.633041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:203728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f4000 len:0x1000 key:0x6f7d6 00:16:38.132 [2024-04-24 17:19:22.633047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.132 [2024-04-24 17:19:22.633055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:203736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f6000 len:0x1000 key:0x6f7d6 00:16:38.132 [2024-04-24 17:19:22.633061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.132 [2024-04-24 17:19:22.633069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:203744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f8000 len:0x1000 key:0x6f7d6 00:16:38.132 [2024-04-24 17:19:22.633076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.132 [2024-04-24 
17:19:22.633083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:203752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077fa000 len:0x1000 key:0x6f7d6 00:16:38.132 [2024-04-24 17:19:22.633089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.132 [2024-04-24 17:19:22.633098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:203760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077fc000 len:0x1000 key:0x6f7d6 00:16:38.132 [2024-04-24 17:19:22.633104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.132 [2024-04-24 17:19:22.633112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:203768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077fe000 len:0x1000 key:0x6f7d6 00:16:38.132 [2024-04-24 17:19:22.633118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.132 [2024-04-24 17:19:22.633126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:203776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.132 [2024-04-24 17:19:22.633134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.132 [2024-04-24 17:19:22.633142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:203784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.132 [2024-04-24 17:19:22.633148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.132 [2024-04-24 17:19:22.633156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:203792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.132 [2024-04-24 17:19:22.633162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.132 [2024-04-24 17:19:22.633170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:203800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.132 [2024-04-24 17:19:22.633176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.132 [2024-04-24 17:19:22.633184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:203808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.132 [2024-04-24 17:19:22.633190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.132 [2024-04-24 17:19:22.633197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:203816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.132 [2024-04-24 17:19:22.633204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.132 [2024-04-24 17:19:22.633212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:203824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.132 [2024-04-24 17:19:22.633221] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.132 [2024-04-24 17:19:22.633229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:203832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.132 [2024-04-24 17:19:22.633236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.132 [2024-04-24 17:19:22.633243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:203840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.132 [2024-04-24 17:19:22.633249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.132 [2024-04-24 17:19:22.633257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:203848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.132 [2024-04-24 17:19:22.633264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.132 [2024-04-24 17:19:22.633271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:203856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.132 [2024-04-24 17:19:22.633278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.132 [2024-04-24 17:19:22.633286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:203864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.132 [2024-04-24 17:19:22.633292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.132 [2024-04-24 17:19:22.633300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:203872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.132 [2024-04-24 17:19:22.633308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.132 [2024-04-24 17:19:22.633316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:203880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.132 [2024-04-24 17:19:22.633322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.132 [2024-04-24 17:19:22.633330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:203888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.132 [2024-04-24 17:19:22.633336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.132 [2024-04-24 17:19:22.633343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:203896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.132 [2024-04-24 17:19:22.633349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.132 [2024-04-24 17:19:22.633357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:203904 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:16:38.132 [2024-04-24 17:19:22.633363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.132 [2024-04-24 17:19:22.633371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:203912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.132 [2024-04-24 17:19:22.633377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.132 [2024-04-24 17:19:22.633385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:203920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.132 [2024-04-24 17:19:22.633391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:203928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:203936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:203944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:203952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:203960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:203968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:203976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633500] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:203984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:203992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:204000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:204008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:204016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:204024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:204032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:204040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:204048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:204056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 
p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:204064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:204072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:204080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:204088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:204096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:204104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:204112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:204120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:204128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:204136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:204144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:204152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:204160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:204168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:204176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:204184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:204192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:204200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:204208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:204216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 
17:19:22.633930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:204224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.133 [2024-04-24 17:19:22.633944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.133 [2024-04-24 17:19:22.633952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:204232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.633958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.633966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:204240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.633973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.633980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:204248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.633986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.633995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:204256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:204264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:204272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:204280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:204288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 
lba:204296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:204304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:204312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:204320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:204328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:204336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:204344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:204352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:204360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:204368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634211] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:204376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:204384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:204392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:204400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:204408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:204416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:204424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:204432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:204440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:204448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 
p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:204456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:204464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:204472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:204480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:204488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:204496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:204504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:204512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:204520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:204528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:204536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.134 [2024-04-24 17:19:22.634509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.134 [2024-04-24 17:19:22.634516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:204544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.135 [2024-04-24 17:19:22.634523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.135 [2024-04-24 17:19:22.634530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:204552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.135 [2024-04-24 17:19:22.634537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.135 [2024-04-24 17:19:22.634546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:204560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.135 [2024-04-24 17:19:22.634553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.135 [2024-04-24 17:19:22.634560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:204568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.135 [2024-04-24 17:19:22.634567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.135 [2024-04-24 17:19:22.634575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:204576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.135 [2024-04-24 17:19:22.634581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.135 [2024-04-24 17:19:22.634589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:204584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.135 [2024-04-24 17:19:22.634596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.135 [2024-04-24 17:19:22.634604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:204592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.135 [2024-04-24 17:19:22.634610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.135 [2024-04-24 17:19:22.634618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:204600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.135 [2024-04-24 17:19:22.634624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.135 [2024-04-24 17:19:22.634632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:204608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.135 [2024-04-24 
17:19:22.634638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.135 [2024-04-24 17:19:22.634646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:204616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.135 [2024-04-24 17:19:22.634652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.135 [2024-04-24 17:19:22.634660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:204624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.135 [2024-04-24 17:19:22.634666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.135 [2024-04-24 17:19:22.634674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:204632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.135 [2024-04-24 17:19:22.634680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.135 [2024-04-24 17:19:22.634688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:204640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.135 [2024-04-24 17:19:22.634695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.135 [2024-04-24 17:19:22.634703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:204648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.135 [2024-04-24 17:19:22.634709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.135 [2024-04-24 17:19:22.634718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:204656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.135 [2024-04-24 17:19:22.634725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.135 [2024-04-24 17:19:22.634732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:204664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.135 [2024-04-24 17:19:22.634740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.135 [2024-04-24 17:19:22.634749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:204672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.135 [2024-04-24 17:19:22.634755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.135 [2024-04-24 17:19:22.634763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:204680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.135 [2024-04-24 17:19:22.634769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.135 [2024-04-24 17:19:22.634777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:204688 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:38.135 [2024-04-24 17:19:22.634784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0
00:16:38.135 [2024-04-24 17:19:22.647477] rdma_verbs.c: 83:spdk_rdma_qp_destroy: *WARNING*: Destroying qpair with queued Work Requests
00:16:38.135 [2024-04-24 17:19:22.647553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:16:38.135 [2024-04-24 17:19:22.647561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:16:38.135 [2024-04-24 17:19:22.647568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:204696 len:8 PRP1 0x0 PRP2 0x0
00:16:38.135 [2024-04-24 17:19:22.647575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:38.135 [2024-04-24 17:19:22.650293] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:16:38.135 [2024-04-24 17:19:22.650641] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19)
00:16:38.135 [2024-04-24 17:19:22.650653] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:16:38.135 [2024-04-24 17:19:22.650659] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080
00:16:38.135 [2024-04-24 17:19:22.650674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:16:38.135 [2024-04-24 17:19:22.650681] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:16:38.135 [2024-04-24 17:19:22.650691] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:16:38.135 [2024-04-24 17:19:22.650697] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:16:38.135 [2024-04-24 17:19:22.650704] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:16:38.135 [2024-04-24 17:19:22.650724] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:38.135 [2024-04-24 17:19:22.650731] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:16:38.135 [2024-04-24 17:19:23.653541] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19)
00:16:38.135 [2024-04-24 17:19:23.653580] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:16:38.135 [2024-04-24 17:19:23.653586] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080
00:16:38.135 [2024-04-24 17:19:23.653603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:16:38.135 [2024-04-24 17:19:23.653611] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:16:38.135 [2024-04-24 17:19:23.653621] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:16:38.135 [2024-04-24 17:19:23.653627] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:16:38.135 [2024-04-24 17:19:23.653634] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:16:38.135 [2024-04-24 17:19:23.653654] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:38.135 [2024-04-24 17:19:23.653662] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:16:38.136 [2024-04-24 17:19:24.656172] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19)
00:16:38.136 [2024-04-24 17:19:24.656204] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:16:38.136 [2024-04-24 17:19:24.656210] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080
00:16:38.136 [2024-04-24 17:19:24.656242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:16:38.136 [2024-04-24 17:19:24.656250] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:16:38.136 [2024-04-24 17:19:24.656260] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:16:38.136 [2024-04-24 17:19:24.656266] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:16:38.136 [2024-04-24 17:19:24.656273] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:16:38.136 [2024-04-24 17:19:24.656294] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:38.136 [2024-04-24 17:19:24.656302] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:16:38.136 [2024-04-24 17:19:26.662124] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:16:38.136 [2024-04-24 17:19:26.662159] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080
00:16:38.136 [2024-04-24 17:19:26.662181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:16:38.136 [2024-04-24 17:19:26.662189] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:16:38.136 [2024-04-24 17:19:26.662374] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:16:38.136 [2024-04-24 17:19:26.662382] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:16:38.136 [2024-04-24 17:19:26.662388] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:16:38.136 [2024-04-24 17:19:26.662419] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:38.136 [2024-04-24 17:19:26.662428] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:16:38.136 [2024-04-24 17:19:28.668134] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:16:38.136 [2024-04-24 17:19:28.668165] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080
00:16:38.136 [2024-04-24 17:19:28.668192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:16:38.136 [2024-04-24 17:19:28.668200] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:16:38.136 [2024-04-24 17:19:28.668541] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:16:38.136 [2024-04-24 17:19:28.668550] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:16:38.136 [2024-04-24 17:19:28.668556] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:16:38.136 [2024-04-24 17:19:28.668588] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:38.136 [2024-04-24 17:19:28.668597] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:16:38.136 [2024-04-24 17:19:30.673550] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:16:38.136 [2024-04-24 17:19:30.673582] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080
00:16:38.136 [2024-04-24 17:19:30.673603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:16:38.136 [2024-04-24 17:19:30.673610] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:16:38.136 [2024-04-24 17:19:30.673621] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:16:38.136 [2024-04-24 17:19:30.673628] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:16:38.136 [2024-04-24 17:19:30.673635] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:16:38.136 [2024-04-24 17:19:30.673656] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:38.136 [2024-04-24 17:19:30.673664] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:16:38.136 [2024-04-24 17:19:31.732960] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:16:38.136 [2024-04-24 17:19:31.796957] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:16:38.136 [2024-04-24 17:19:31.796981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:16:38.136 [2024-04-24 17:19:31.796990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32716 cdw0:16 sqhd:c3b9 p:0 m:0 dnr:0
00:16:38.136 [2024-04-24 17:19:31.796998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:16:38.136 [2024-04-24 17:19:31.797004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32716 cdw0:16 sqhd:c3b9 p:0 m:0 dnr:0
00:16:38.136 [2024-04-24 17:19:31.797012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:16:38.136 [2024-04-24 17:19:31.797018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32716 cdw0:16 sqhd:c3b9 p:0 m:0 dnr:0
00:16:38.136 [2024-04-24 17:19:31.797025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:16:38.136 [2024-04-24 17:19:31.797031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32716 cdw0:16 sqhd:c3b9 p:0 m:0 dnr:0
00:16:38.136 [2024-04-24 17:19:31.799022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:16:38.136 [2024-04-24 17:19:31.799057] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state.
00:16:38.136 [2024-04-24 17:19:31.799112] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:16:38.136 [2024-04-24 17:19:31.806964] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:38.136 [2024-04-24 17:19:31.816989] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:38.136 [2024-04-24 17:19:31.827015] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:38.136 [2024-04-24 17:19:31.837039] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:38.136 [2024-04-24 17:19:31.847064] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:38.136 [2024-04-24 17:19:31.857090] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:38.136 [2024-04-24 17:19:31.867115] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:38.136 [2024-04-24 17:19:31.877142] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:38.136 [2024-04-24 17:19:31.887167] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:38.136 [2024-04-24 17:19:31.897196] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:16:38.136 [2024-04-24 17:19:31.907223] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.136 [2024-04-24 17:19:31.917249] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.136 [2024-04-24 17:19:31.927275] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.136 [2024-04-24 17:19:31.937303] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.136 [2024-04-24 17:19:31.947330] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.136 [2024-04-24 17:19:31.957355] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.136 [2024-04-24 17:19:31.967382] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.136 [2024-04-24 17:19:31.977409] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.136 [2024-04-24 17:19:31.987434] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.136 [2024-04-24 17:19:31.997459] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.136 [2024-04-24 17:19:32.007484] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.136 [2024-04-24 17:19:32.017509] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.136 [2024-04-24 17:19:32.027536] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.136 [2024-04-24 17:19:32.037563] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.136 [2024-04-24 17:19:32.047590] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.136 [2024-04-24 17:19:32.057618] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.136 [2024-04-24 17:19:32.067645] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.136 [2024-04-24 17:19:32.077671] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.136 [2024-04-24 17:19:32.087696] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.136 [2024-04-24 17:19:32.097721] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.136 [2024-04-24 17:19:32.107746] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.136 [2024-04-24 17:19:32.117771] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.136 [2024-04-24 17:19:32.127798] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:16:38.136 [2024-04-24 17:19:32.137829] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.136 [2024-04-24 17:19:32.147850] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.136 [2024-04-24 17:19:32.157876] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.136 [2024-04-24 17:19:32.167901] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.136 [2024-04-24 17:19:32.177929] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.136 [2024-04-24 17:19:32.187954] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.136 [2024-04-24 17:19:32.197980] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.208006] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.218034] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.228060] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.238087] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.248113] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.258138] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.268164] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.278191] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.288218] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.298246] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.308272] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.318297] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.328325] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.338352] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.348376] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.358404] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:16:38.137 [2024-04-24 17:19:32.368429] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.378456] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.388483] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.398510] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.408535] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.418563] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.428590] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.438616] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.448641] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.458666] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.468692] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.478720] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.488745] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.498770] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.508796] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.518821] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.528850] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.538875] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.548902] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.558929] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.568957] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.578984] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.589010] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:16:38.137 [2024-04-24 17:19:32.599036] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.609063] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.619090] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.629116] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.639144] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.649170] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.659196] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.669223] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.679251] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.689899] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.700216] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.710244] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.720336] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.730383] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.740659] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.751052] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.761316] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.771682] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.781882] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:38.137 [2024-04-24 17:19:32.793144] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:16:38.137 [2024-04-24 17:19:32.801578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:39920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079fc000 len:0x1000 key:0x18d0cf 00:16:38.137 [2024-04-24 17:19:32.801596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.137 [2024-04-24 17:19:32.801610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:39928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079fe000 len:0x1000 key:0x18d0cf 00:16:38.137 [2024-04-24 17:19:32.801616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.137 [2024-04-24 17:19:32.801625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.137 [2024-04-24 17:19:32.801631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.137 [2024-04-24 17:19:32.801639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.137 [2024-04-24 17:19:32.801645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.137 [2024-04-24 17:19:32.801653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.137 [2024-04-24 17:19:32.801659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.137 [2024-04-24 17:19:32.801667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.137 [2024-04-24 17:19:32.801673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.137 [2024-04-24 17:19:32.801681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.137 [2024-04-24 17:19:32.801687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.137 [2024-04-24 17:19:32.801695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.137 [2024-04-24 17:19:32.801701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.137 [2024-04-24 17:19:32.801712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.137 [2024-04-24 17:19:32.801718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.137 [2024-04-24 17:19:32.801726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.137 [2024-04-24 17:19:32.801732] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.137 [2024-04-24 17:19:32.801740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:40000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.137 [2024-04-24 17:19:32.801746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.137 [2024-04-24 17:19:32.801753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.137 [2024-04-24 17:19:32.801759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.137 [2024-04-24 17:19:32.801767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:40016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.137 [2024-04-24 17:19:32.801773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.137 [2024-04-24 17:19:32.801781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.137 [2024-04-24 17:19:32.801787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.801795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:40032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.801801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.801810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.801816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.801824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:40048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.801834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.801841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:40056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.801848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.801857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:40064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.801863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.801871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:38.138 [2024-04-24 17:19:32.801877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.801887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:40080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.801894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.801901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:40088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.801908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.801916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:40096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.801922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.801930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:40104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.801936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.801944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:40112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.801950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.801958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:40120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.801964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.801972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:40128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.801978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.801987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.801993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.802001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:40144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.802007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.802014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 
nsid:1 lba:40152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.802021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.802028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.802034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.802043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:40168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.802049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.802057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:40176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.802064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.802071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:40184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.802078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.802086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:40192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.802092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.802100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:40200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.802106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.802114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:40208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.802120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.802128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:40216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.802134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.802142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:40224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.802148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.802156] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:40232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.802162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.802170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:40240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.802176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.802184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:40248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.802191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.802198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:40256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.802204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.802212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:40264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.802218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.802226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:40272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.802233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.802241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:40280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.802248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.802256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:40288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.802262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.802270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:40296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.802276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.802283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:40304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.802290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 
dnr:0 00:16:38.138 [2024-04-24 17:19:32.802297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:40312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.802304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.802311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:40320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.802318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.802325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.138 [2024-04-24 17:19:32.802332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.138 [2024-04-24 17:19:32.802339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:40336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.139 [2024-04-24 17:19:32.802345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.139 [2024-04-24 17:19:32.802353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:40344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.139 [2024-04-24 17:19:32.802359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.139 [2024-04-24 17:19:32.802367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:40352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.139 [2024-04-24 17:19:32.802373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.139 [2024-04-24 17:19:32.802380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.139 [2024-04-24 17:19:32.802386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.139 [2024-04-24 17:19:32.802395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:40368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.139 [2024-04-24 17:19:32.802403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.139 [2024-04-24 17:19:32.802410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:40376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.139 [2024-04-24 17:19:32.802417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.139 [2024-04-24 17:19:32.802425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:40384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.139 [2024-04-24 17:19:32.802431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.139 [2024-04-24 17:19:32.802438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.139 [2024-04-24 17:19:32.802444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.139 [2024-04-24 17:19:32.802452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:40400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.139 [2024-04-24 17:19:32.802459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.139 [2024-04-24 17:19:32.802466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:40408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.139 [2024-04-24 17:19:32.802472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.139 [2024-04-24 17:19:32.802480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.139 [2024-04-24 17:19:32.802486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.139 [2024-04-24 17:19:32.802494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:40424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.139 [2024-04-24 17:19:32.802500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.139 [2024-04-24 17:19:32.802508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:40432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.139 [2024-04-24 17:19:32.802514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.139 [2024-04-24 17:19:32.802521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:40440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.139 [2024-04-24 17:19:32.802527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.139 [2024-04-24 17:19:32.802535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.139 [2024-04-24 17:19:32.802541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.139 [2024-04-24 17:19:32.802550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:40456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.139 [2024-04-24 17:19:32.802557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.139 [2024-04-24 17:19:32.802564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:40464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.139 [2024-04-24 17:19:32.802571] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.139 [2024-04-24 17:19:32.802579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:40472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.139 [2024-04-24 17:19:32.802586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.139 [2024-04-24 17:19:32.802594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:40480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.139 [2024-04-24 17:19:32.802600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.139 [2024-04-24 17:19:32.802608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:40488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.139 [2024-04-24 17:19:32.802615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.139 [2024-04-24 17:19:32.802622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:40496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.139 [2024-04-24 17:19:32.802628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.139 [2024-04-24 17:19:32.802636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:40504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.139 [2024-04-24 17:19:32.802642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.139 [2024-04-24 17:19:32.802656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:40512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.139 [2024-04-24 17:19:32.802663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.139 [2024-04-24 17:19:32.802671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:40520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.139 [2024-04-24 17:19:32.802677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.139 [2024-04-24 17:19:32.802684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:40528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.139 [2024-04-24 17:19:32.802691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.139 [2024-04-24 17:19:32.802698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:40536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.139 [2024-04-24 17:19:32.802705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.139 [2024-04-24 17:19:32.802713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:40544 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:16:38.139 [2024-04-24 17:19:32.802719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.139 [2024-04-24 17:19:32.802727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:40552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.139 [2024-04-24 17:19:32.802733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.139 [2024-04-24 17:19:32.802741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:40560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.802747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.802756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.802763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.802772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:40576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.802778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.802786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:40584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.802792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.802800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:40592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.802807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.802815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:40600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.802821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.802832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:40608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.802839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.802847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:40616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.802853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.802861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:123 nsid:1 lba:40624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.802868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.802876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:40632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.802882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.802890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:40640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.802896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.802904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:40648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.802910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.802918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:40656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.802924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.802932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:40664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.802940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.802948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:40672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.802954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.802962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:40680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.802969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.802977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:40688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.802983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.802990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:40696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.803003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.803011] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:40704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.803017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.803025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:40712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.803031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.803039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:40720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.803045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.803053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:40728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.803059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.803067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:40736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.803073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.803081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:40744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.803087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.803095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:40752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.803101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.803108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.803118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.803126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:40768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.803132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.803140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:40776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.803146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 
sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.803154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:40784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.803160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.803168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:40792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.803174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.803182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:40800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.803188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.803196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:40808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.803202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.803209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:40816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.803216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.803224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.803231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.803240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:40832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.803246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.803254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:40840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.803260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.803268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:40848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.803274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.803282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:40856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.803288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.803297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:40864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.140 [2024-04-24 17:19:32.803303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.140 [2024-04-24 17:19:32.803311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:40872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.141 [2024-04-24 17:19:32.803318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.141 [2024-04-24 17:19:32.803325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.141 [2024-04-24 17:19:32.803332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.141 [2024-04-24 17:19:32.803339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:40888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.141 [2024-04-24 17:19:32.803345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.141 [2024-04-24 17:19:32.803353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:40896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.141 [2024-04-24 17:19:32.803358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.141 [2024-04-24 17:19:32.803366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:40904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.141 [2024-04-24 17:19:32.803373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.141 [2024-04-24 17:19:32.803380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:40912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.141 [2024-04-24 17:19:32.803387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.141 [2024-04-24 17:19:32.803394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:40920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.141 [2024-04-24 17:19:32.803400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.141 [2024-04-24 17:19:32.803408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:40928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:38.141 [2024-04-24 17:19:32.803414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32716 cdw0:a32d0110 sqhd:9530 p:0 m:0 dnr:0 00:16:38.141 [2024-04-24 17:19:32.816152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:38.141 [2024-04-24 17:19:32.816164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:16:38.141 [2024-04-24 17:19:32.816170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40936 len:8 PRP1 0x0 PRP2 0x0 00:16:38.141 [2024-04-24 17:19:32.816177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.141 [2024-04-24 17:19:32.816221] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:16:38.141 [2024-04-24 17:19:32.818351] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:16:38.141 [2024-04-24 17:19:32.818367] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:16:38.141 [2024-04-24 17:19:32.818376] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:16:38.141 [2024-04-24 17:19:32.818389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:38.141 [2024-04-24 17:19:32.818395] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:16:38.141 [2024-04-24 17:19:32.818411] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:16:38.141 [2024-04-24 17:19:32.818417] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:16:38.141 [2024-04-24 17:19:32.818424] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:16:38.141 [2024-04-24 17:19:32.818441] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:38.141 [2024-04-24 17:19:32.818448] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:16:38.141 [2024-04-24 17:19:33.821349] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:16:38.141 [2024-04-24 17:19:33.821385] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:16:38.141 [2024-04-24 17:19:33.821391] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:16:38.141 [2024-04-24 17:19:33.821425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:38.141 [2024-04-24 17:19:33.821433] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:16:38.141 [2024-04-24 17:19:33.821443] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:16:38.141 [2024-04-24 17:19:33.821449] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:16:38.141 [2024-04-24 17:19:33.821456] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:16:38.141 [2024-04-24 17:19:33.821477] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:38.141 [2024-04-24 17:19:33.821485] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:16:38.141 [2024-04-24 17:19:34.824015] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:16:38.141 [2024-04-24 17:19:34.824050] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:16:38.141 [2024-04-24 17:19:34.824056] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:16:38.141 [2024-04-24 17:19:34.824076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:38.141 [2024-04-24 17:19:34.824083] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:16:38.141 [2024-04-24 17:19:34.824093] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:16:38.141 [2024-04-24 17:19:34.824099] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:16:38.141 [2024-04-24 17:19:34.824106] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:16:38.141 [2024-04-24 17:19:34.824127] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:38.141 [2024-04-24 17:19:34.824135] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:16:38.141 [2024-04-24 17:19:36.831922] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:16:38.141 [2024-04-24 17:19:36.831965] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:16:38.141 [2024-04-24 17:19:36.831986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:38.141 [2024-04-24 17:19:36.831994] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:16:38.141 [2024-04-24 17:19:36.832015] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:16:38.141 [2024-04-24 17:19:36.832022] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:16:38.141 [2024-04-24 17:19:36.832029] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:16:38.141 [2024-04-24 17:19:36.832062] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:38.141 [2024-04-24 17:19:36.832070] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:16:38.141 [2024-04-24 17:19:38.838358] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:16:38.141 [2024-04-24 17:19:38.838393] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:16:38.141 [2024-04-24 17:19:38.838415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:38.141 [2024-04-24 17:19:38.838423] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:16:38.141 [2024-04-24 17:19:38.838768] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:16:38.141 [2024-04-24 17:19:38.838776] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:16:38.141 [2024-04-24 17:19:38.838784] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:16:38.141 [2024-04-24 17:19:38.839134] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:38.141 [2024-04-24 17:19:38.839144] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:16:38.141 [2024-04-24 17:19:40.095878] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:38.141 00:16:38.141 Latency(us) 00:16:38.141 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:38.141 Job: Nvme_mlx_0_0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:38.141 Verification LBA range: start 0x0 length 0x8000 00:16:38.141 Nvme_mlx_0_0n1 : 90.01 10847.73 42.37 0.00 0.00 11778.51 2012.89 11056984.26 00:16:38.141 Job: Nvme_mlx_0_1n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:38.141 Verification LBA range: start 0x0 length 0x8000 00:16:38.141 Nvme_mlx_0_1n1 : 90.01 10019.07 39.14 0.00 0.00 12755.86 2215.74 9075674.94 00:16:38.141 =================================================================================================================== 00:16:38.141 Total : 20866.80 81.51 0.00 0.00 12247.79 2012.89 11056984.26 00:16:38.141 Received shutdown signal, test time was about 90.000000 seconds 00:16:38.141 00:16:38.142 Latency(us) 00:16:38.142 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:38.142 =================================================================================================================== 00:16:38.142 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:38.142 17:20:47 -- target/device_removal.sh@123 -- # trap - SIGINT SIGTERM EXIT 00:16:38.142 17:20:47 -- target/device_removal.sh@124 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt 00:16:38.142 17:20:47 -- target/device_removal.sh@202 -- # killprocess 3020369 00:16:38.142 17:20:47 -- common/autotest_common.sh@936 -- # '[' -z 3020369 ']' 00:16:38.142 17:20:47 -- common/autotest_common.sh@940 -- # kill -0 3020369 00:16:38.142 17:20:47 -- common/autotest_common.sh@941 -- # uname 00:16:38.142 17:20:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:38.142 17:20:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3020369 00:16:38.142 17:20:47 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:38.142 17:20:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:38.142 17:20:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3020369' 00:16:38.142 killing process with pid 3020369 00:16:38.142 17:20:47 -- common/autotest_common.sh@955 -- # kill 3020369 00:16:38.142 17:20:47 -- common/autotest_common.sh@960 -- # wait 3020369 00:16:38.402 17:20:47 -- target/device_removal.sh@203 -- # nvmfpid= 00:16:38.402 17:20:47 -- target/device_removal.sh@205 -- # return 0 00:16:38.402 00:16:38.402 real 1m33.251s 00:16:38.402 user 4m28.356s 00:16:38.402 sys 0m3.892s 00:16:38.402 17:20:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:38.402 17:20:47 -- common/autotest_common.sh@10 -- # set +x 00:16:38.402 ************************************ 00:16:38.402 END TEST nvmf_device_removal_pci_remove 00:16:38.402 ************************************ 00:16:38.402 17:20:47 -- target/device_removal.sh@317 -- # nvmftestfini 00:16:38.402 17:20:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:38.402 17:20:47 -- nvmf/common.sh@117 -- # sync 00:16:38.402 17:20:47 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:38.402 17:20:47 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:38.402 17:20:47 -- nvmf/common.sh@120 -- # set +e 00:16:38.402 17:20:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:38.402 17:20:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:38.402 rmmod nvme_rdma 00:16:38.402 rmmod nvme_fabrics 00:16:38.402 17:20:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:38.402 17:20:47 -- nvmf/common.sh@124 -- # set -e 00:16:38.402 17:20:47 -- nvmf/common.sh@125 -- # return 0 00:16:38.402 17:20:47 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:16:38.402 17:20:47 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:38.402 17:20:47 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:16:38.402 17:20:47 -- target/device_removal.sh@318 -- # clean_bond_device 00:16:38.402 17:20:47 -- target/device_removal.sh@240 -- # ip link 00:16:38.402 17:20:47 -- target/device_removal.sh@240 -- # grep bond_nvmf 00:16:38.402 00:16:38.402 real 3m12.345s 00:16:38.402 user 8m59.413s 00:16:38.402 sys 0m11.844s 00:16:38.402 17:20:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:38.402 17:20:47 -- common/autotest_common.sh@10 -- # set +x 00:16:38.402 ************************************ 00:16:38.402 END TEST nvmf_device_removal 00:16:38.402 ************************************ 00:16:38.402 17:20:47 -- nvmf/nvmf.sh@79 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:16:38.402 17:20:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:38.402 17:20:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:38.402 17:20:47 -- common/autotest_common.sh@10 -- # set +x 00:16:38.661 ************************************ 00:16:38.661 START TEST nvmf_srq_overwhelm 00:16:38.661 ************************************ 00:16:38.661 17:20:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:16:38.661 * Looking for test storage... 
00:16:38.661 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:38.661 17:20:47 -- target/srq_overwhelm.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:38.661 17:20:47 -- nvmf/common.sh@7 -- # uname -s 00:16:38.661 17:20:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:38.661 17:20:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:38.661 17:20:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:38.661 17:20:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:38.661 17:20:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:38.661 17:20:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:38.661 17:20:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:38.661 17:20:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:38.661 17:20:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:38.661 17:20:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:38.661 17:20:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:38.661 17:20:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:16:38.661 17:20:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:38.661 17:20:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:38.661 17:20:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:38.661 17:20:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:38.661 17:20:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:38.661 17:20:47 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:38.661 17:20:47 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:38.661 17:20:47 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:38.661 17:20:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.661 17:20:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.661 17:20:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.661 17:20:47 -- paths/export.sh@5 -- # export PATH 00:16:38.661 17:20:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.661 17:20:47 -- nvmf/common.sh@47 -- # : 0 00:16:38.661 17:20:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:38.661 17:20:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:38.661 17:20:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:38.661 17:20:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:38.661 17:20:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:38.661 17:20:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:38.661 17:20:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:38.661 17:20:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:38.661 17:20:47 -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:38.661 17:20:47 -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:38.661 17:20:47 -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:16:38.661 17:20:47 -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:16:38.661 17:20:47 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:16:38.661 17:20:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:38.661 17:20:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:38.661 17:20:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:38.661 17:20:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:38.661 17:20:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.661 17:20:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:38.661 17:20:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.661 17:20:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:38.661 17:20:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:38.661 17:20:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:38.661 17:20:47 -- common/autotest_common.sh@10 -- # set +x 00:16:43.928 17:20:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:43.928 17:20:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:43.928 17:20:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:43.928 17:20:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:44.188 17:20:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:44.188 17:20:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:44.188 17:20:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:44.188 17:20:53 -- nvmf/common.sh@295 -- # net_devs=() 
00:16:44.188 17:20:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:44.188 17:20:53 -- nvmf/common.sh@296 -- # e810=() 00:16:44.188 17:20:53 -- nvmf/common.sh@296 -- # local -ga e810 00:16:44.188 17:20:53 -- nvmf/common.sh@297 -- # x722=() 00:16:44.188 17:20:53 -- nvmf/common.sh@297 -- # local -ga x722 00:16:44.188 17:20:53 -- nvmf/common.sh@298 -- # mlx=() 00:16:44.188 17:20:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:44.188 17:20:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:44.188 17:20:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:44.188 17:20:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:44.188 17:20:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:44.188 17:20:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:44.188 17:20:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:44.188 17:20:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:44.188 17:20:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:44.188 17:20:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:44.188 17:20:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:44.188 17:20:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:44.188 17:20:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:44.188 17:20:53 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:44.188 17:20:53 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:44.188 17:20:53 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:44.188 17:20:53 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:44.188 17:20:53 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:44.188 17:20:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:44.188 17:20:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:44.188 17:20:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:16:44.188 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:16:44.188 17:20:53 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:44.188 17:20:53 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:44.188 17:20:53 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:44.188 17:20:53 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:44.188 17:20:53 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:44.188 17:20:53 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:44.188 17:20:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:44.188 17:20:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:16:44.188 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:16:44.188 17:20:53 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:44.188 17:20:53 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:44.188 17:20:53 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:44.188 17:20:53 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:44.188 17:20:53 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:44.188 17:20:53 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:44.188 17:20:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:44.188 17:20:53 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:44.188 17:20:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:44.188 17:20:53 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:44.188 17:20:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:44.188 17:20:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:44.188 17:20:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:16:44.188 Found net devices under 0000:da:00.0: mlx_0_0 00:16:44.188 17:20:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:44.188 17:20:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:44.188 17:20:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:44.188 17:20:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:44.188 17:20:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:44.188 17:20:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:16:44.188 Found net devices under 0000:da:00.1: mlx_0_1 00:16:44.188 17:20:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:44.188 17:20:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:44.188 17:20:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:44.188 17:20:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:44.188 17:20:53 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:16:44.188 17:20:53 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:16:44.188 17:20:53 -- nvmf/common.sh@409 -- # rdma_device_init 00:16:44.188 17:20:53 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:16:44.188 17:20:53 -- nvmf/common.sh@58 -- # uname 00:16:44.188 17:20:53 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:44.188 17:20:53 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:44.188 17:20:53 -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:44.188 17:20:53 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:44.188 17:20:53 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:44.188 17:20:53 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:44.188 17:20:53 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:44.188 17:20:53 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:44.188 17:20:53 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:16:44.188 17:20:53 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:44.188 17:20:53 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:44.188 17:20:53 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:44.188 17:20:53 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:44.188 17:20:53 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:44.188 17:20:53 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:44.188 17:20:53 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:44.189 17:20:53 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:44.189 17:20:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:44.189 17:20:53 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:44.189 17:20:53 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:44.189 17:20:53 -- nvmf/common.sh@105 -- # continue 2 00:16:44.189 17:20:53 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:44.189 17:20:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:44.189 17:20:53 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:44.189 17:20:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:44.189 17:20:53 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:44.189 17:20:53 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:44.189 17:20:53 -- 
nvmf/common.sh@105 -- # continue 2 00:16:44.189 17:20:53 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:44.189 17:20:53 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:44.189 17:20:53 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:44.189 17:20:53 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:44.189 17:20:53 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:44.189 17:20:53 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:44.189 17:20:53 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:44.189 17:20:53 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:44.189 17:20:53 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:44.189 434: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:44.189 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:16:44.189 altname enp218s0f0np0 00:16:44.189 altname ens818f0np0 00:16:44.189 inet 192.168.100.8/24 scope global mlx_0_0 00:16:44.189 valid_lft forever preferred_lft forever 00:16:44.189 17:20:53 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:44.189 17:20:53 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:44.189 17:20:53 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:44.189 17:20:53 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:44.189 17:20:53 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:44.189 17:20:53 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:44.189 17:20:53 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:44.189 17:20:53 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:44.189 17:20:53 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:44.189 435: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:44.189 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:16:44.189 altname enp218s0f1np1 00:16:44.189 altname ens818f1np1 00:16:44.189 inet 192.168.100.9/24 scope global mlx_0_1 00:16:44.189 valid_lft forever preferred_lft forever 00:16:44.189 17:20:53 -- nvmf/common.sh@411 -- # return 0 00:16:44.189 17:20:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:44.189 17:20:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:44.189 17:20:53 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:16:44.189 17:20:53 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:16:44.189 17:20:53 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:44.189 17:20:53 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:44.189 17:20:53 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:44.189 17:20:53 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:44.189 17:20:53 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:44.189 17:20:53 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:44.189 17:20:53 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:44.189 17:20:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:44.189 17:20:53 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:44.189 17:20:53 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:44.189 17:20:53 -- nvmf/common.sh@105 -- # continue 2 00:16:44.189 17:20:53 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:44.189 17:20:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:44.189 17:20:53 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:44.189 17:20:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:44.189 17:20:53 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:16:44.189 17:20:53 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:44.189 17:20:53 -- nvmf/common.sh@105 -- # continue 2 00:16:44.189 17:20:53 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:44.189 17:20:53 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:44.189 17:20:53 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:44.189 17:20:53 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:44.189 17:20:53 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:44.189 17:20:53 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:44.189 17:20:53 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:44.189 17:20:53 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:44.189 17:20:53 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:44.189 17:20:53 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:44.189 17:20:53 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:44.189 17:20:53 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:44.189 17:20:53 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:16:44.189 192.168.100.9' 00:16:44.189 17:20:53 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:44.189 192.168.100.9' 00:16:44.189 17:20:53 -- nvmf/common.sh@446 -- # head -n 1 00:16:44.189 17:20:53 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:44.189 17:20:53 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:16:44.189 192.168.100.9' 00:16:44.189 17:20:53 -- nvmf/common.sh@447 -- # tail -n +2 00:16:44.189 17:20:53 -- nvmf/common.sh@447 -- # head -n 1 00:16:44.189 17:20:53 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:44.189 17:20:53 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:16:44.189 17:20:53 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:44.189 17:20:53 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:16:44.189 17:20:53 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:16:44.189 17:20:53 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:16:44.189 17:20:53 -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:16:44.189 17:20:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:44.189 17:20:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:44.189 17:20:53 -- common/autotest_common.sh@10 -- # set +x 00:16:44.189 17:20:53 -- nvmf/common.sh@470 -- # nvmfpid=3024107 00:16:44.189 17:20:53 -- nvmf/common.sh@471 -- # waitforlisten 3024107 00:16:44.189 17:20:53 -- common/autotest_common.sh@817 -- # '[' -z 3024107 ']' 00:16:44.189 17:20:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.189 17:20:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:44.189 17:20:53 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:44.189 17:20:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.189 17:20:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:44.189 17:20:53 -- common/autotest_common.sh@10 -- # set +x 00:16:44.189 [2024-04-24 17:20:53.402794] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:16:44.189 [2024-04-24 17:20:53.402845] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:44.189 EAL: No free 2048 kB hugepages reported on node 1 00:16:44.449 [2024-04-24 17:20:53.455530] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:44.449 [2024-04-24 17:20:53.533494] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:44.449 [2024-04-24 17:20:53.533531] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:44.449 [2024-04-24 17:20:53.533538] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:44.449 [2024-04-24 17:20:53.533544] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:44.449 [2024-04-24 17:20:53.533549] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:44.449 [2024-04-24 17:20:53.533594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.449 [2024-04-24 17:20:53.533613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:44.449 [2024-04-24 17:20:53.533702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:44.449 [2024-04-24 17:20:53.533703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.015 17:20:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:45.015 17:20:54 -- common/autotest_common.sh@850 -- # return 0 00:16:45.015 17:20:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:45.015 17:20:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:45.015 17:20:54 -- common/autotest_common.sh@10 -- # set +x 00:16:45.015 17:20:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.015 17:20:54 -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:16:45.015 17:20:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:45.015 17:20:54 -- common/autotest_common.sh@10 -- # set +x 00:16:45.273 [2024-04-24 17:20:54.270953] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c28f60/0x1c2d450) succeed. 00:16:45.273 [2024-04-24 17:20:54.281104] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c2a550/0x1c6eae0) succeed. 
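(Editorial sketch, not part of the captured log.) The rpc_cmd calls traced around this point amount to the target-side bring-up below. This is a minimal reconstruction under assumptions: the scripts/rpc.py helper path and its use in place of the test suite's rpc_cmd wrapper are assumed; the command names, NQN, malloc size/block size, and listener address are copied from the trace and apply to the first subsystem only.

    # assumed helper path; the log drives these through the rpc_cmd test wrapper instead
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # create the RDMA transport with the parameters shown in the trace
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024
    # create subsystem cnode0, back it with a 64 MiB / 512 B-block malloc bdev, expose it on 192.168.100.8:4420
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420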
00:16:45.273 17:20:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:45.273 17:20:54 -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:16:45.273 17:20:54 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:16:45.273 17:20:54 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:16:45.273 17:20:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:45.273 17:20:54 -- common/autotest_common.sh@10 -- # set +x 00:16:45.273 17:20:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:45.273 17:20:54 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:45.273 17:20:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:45.273 17:20:54 -- common/autotest_common.sh@10 -- # set +x 00:16:45.273 Malloc0 00:16:45.273 17:20:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:45.273 17:20:54 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:16:45.273 17:20:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:45.273 17:20:54 -- common/autotest_common.sh@10 -- # set +x 00:16:45.273 17:20:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:45.273 17:20:54 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:16:45.273 17:20:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:45.273 17:20:54 -- common/autotest_common.sh@10 -- # set +x 00:16:45.273 [2024-04-24 17:20:54.381357] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:45.273 17:20:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:45.273 17:20:54 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:16:46.208 17:20:55 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:16:46.208 17:20:55 -- common/autotest_common.sh@1221 -- # local i=0 00:16:46.208 17:20:55 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME 00:16:46.209 17:20:55 -- common/autotest_common.sh@1222 -- # grep -q -w nvme0n1 00:16:46.209 17:20:55 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME 00:16:46.209 17:20:55 -- common/autotest_common.sh@1228 -- # grep -q -w nvme0n1 00:16:46.209 17:20:55 -- common/autotest_common.sh@1232 -- # return 0 00:16:46.209 17:20:55 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:16:46.209 17:20:55 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:46.209 17:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:46.209 17:20:55 -- common/autotest_common.sh@10 -- # set +x 00:16:46.209 17:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:46.209 17:20:55 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:46.209 17:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:46.209 17:20:55 -- common/autotest_common.sh@10 -- # set +x 00:16:46.209 Malloc1 00:16:46.209 17:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:46.209 17:20:55 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:46.209 17:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:46.209 17:20:55 -- 
common/autotest_common.sh@10 -- # set +x 00:16:46.209 17:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:46.209 17:20:55 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:46.209 17:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:46.209 17:20:55 -- common/autotest_common.sh@10 -- # set +x 00:16:46.467 17:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:46.467 17:20:55 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:47.402 17:20:56 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:16:47.402 17:20:56 -- common/autotest_common.sh@1221 -- # local i=0 00:16:47.402 17:20:56 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME 00:16:47.402 17:20:56 -- common/autotest_common.sh@1222 -- # grep -q -w nvme1n1 00:16:47.402 17:20:56 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME 00:16:47.402 17:20:56 -- common/autotest_common.sh@1228 -- # grep -q -w nvme1n1 00:16:47.402 17:20:56 -- common/autotest_common.sh@1232 -- # return 0 00:16:47.402 17:20:56 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:16:47.402 17:20:56 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:16:47.402 17:20:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.402 17:20:56 -- common/autotest_common.sh@10 -- # set +x 00:16:47.402 17:20:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.402 17:20:56 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:16:47.402 17:20:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.402 17:20:56 -- common/autotest_common.sh@10 -- # set +x 00:16:47.402 Malloc2 00:16:47.402 17:20:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.402 17:20:56 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:16:47.402 17:20:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.402 17:20:56 -- common/autotest_common.sh@10 -- # set +x 00:16:47.402 17:20:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.402 17:20:56 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:16:47.402 17:20:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.402 17:20:56 -- common/autotest_common.sh@10 -- # set +x 00:16:47.402 17:20:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.402 17:20:56 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:16:48.336 17:20:57 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:16:48.336 17:20:57 -- common/autotest_common.sh@1221 -- # local i=0 00:16:48.336 17:20:57 -- common/autotest_common.sh@1222 -- # grep -q -w nvme2n1 00:16:48.336 17:20:57 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME 00:16:48.336 17:20:57 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME 00:16:48.336 17:20:57 -- common/autotest_common.sh@1228 -- # grep -q -w nvme2n1 00:16:48.336 17:20:57 -- common/autotest_common.sh@1232 -- # return 0 
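(Editorial sketch, not part of the captured log.) Each host-side iteration traced above pairs an `nvme connect` with a waitforblk-style poll for the new block device. The sketch below mirrors that pattern for cnode1; the connect arguments are copied from the trace, while the retry budget and sleep interval are assumptions (the log's waitforblk helper uses its own internal loop).

    # connect to subsystem cnode1 over RDMA, as in the traced command
    nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 \
         -n nqn.2016-06.io.spdk:cnode1 \
         --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
         --hostid=803833e2-2ada-e911-906e-0017a4403562
    # poll until the namespace shows up as a block device (assumed 30 x 1 s budget)
    for i in $(seq 1 30); do
        lsblk -l -o NAME | grep -q -w nvme1n1 && break
        sleep 1
    done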
00:16:48.336 17:20:57 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:16:48.336 17:20:57 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:16:48.336 17:20:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:48.336 17:20:57 -- common/autotest_common.sh@10 -- # set +x 00:16:48.336 17:20:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:48.336 17:20:57 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:16:48.336 17:20:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:48.336 17:20:57 -- common/autotest_common.sh@10 -- # set +x 00:16:48.336 Malloc3 00:16:48.336 17:20:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:48.336 17:20:57 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:16:48.336 17:20:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:48.336 17:20:57 -- common/autotest_common.sh@10 -- # set +x 00:16:48.336 17:20:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:48.336 17:20:57 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:16:48.336 17:20:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:48.336 17:20:57 -- common/autotest_common.sh@10 -- # set +x 00:16:48.336 17:20:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:48.336 17:20:57 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:16:49.330 17:20:58 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:16:49.330 17:20:58 -- common/autotest_common.sh@1221 -- # local i=0 00:16:49.330 17:20:58 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME 00:16:49.330 17:20:58 -- common/autotest_common.sh@1222 -- # grep -q -w nvme3n1 00:16:49.330 17:20:58 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME 00:16:49.330 17:20:58 -- common/autotest_common.sh@1228 -- # grep -q -w nvme3n1 00:16:49.330 17:20:58 -- common/autotest_common.sh@1232 -- # return 0 00:16:49.330 17:20:58 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:16:49.330 17:20:58 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:16:49.330 17:20:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:49.330 17:20:58 -- common/autotest_common.sh@10 -- # set +x 00:16:49.330 17:20:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:49.330 17:20:58 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:16:49.330 17:20:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:49.330 17:20:58 -- common/autotest_common.sh@10 -- # set +x 00:16:49.330 Malloc4 00:16:49.330 17:20:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:49.330 17:20:58 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:16:49.330 17:20:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:49.330 17:20:58 -- common/autotest_common.sh@10 -- # set +x 00:16:49.330 17:20:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:49.330 17:20:58 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:16:49.330 17:20:58 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:16:49.330 17:20:58 -- common/autotest_common.sh@10 -- # set +x 00:16:49.330 17:20:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:49.330 17:20:58 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:16:50.705 17:20:59 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:16:50.705 17:20:59 -- common/autotest_common.sh@1221 -- # local i=0 00:16:50.705 17:20:59 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME 00:16:50.705 17:20:59 -- common/autotest_common.sh@1222 -- # grep -q -w nvme4n1 00:16:50.705 17:20:59 -- common/autotest_common.sh@1228 -- # grep -q -w nvme4n1 00:16:50.705 17:20:59 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME 00:16:50.705 17:20:59 -- common/autotest_common.sh@1232 -- # return 0 00:16:50.705 17:20:59 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:16:50.705 17:20:59 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:16:50.705 17:20:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.705 17:20:59 -- common/autotest_common.sh@10 -- # set +x 00:16:50.705 17:20:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.705 17:20:59 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:16:50.705 17:20:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.705 17:20:59 -- common/autotest_common.sh@10 -- # set +x 00:16:50.705 Malloc5 00:16:50.705 17:20:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.705 17:20:59 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:16:50.705 17:20:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.705 17:20:59 -- common/autotest_common.sh@10 -- # set +x 00:16:50.705 17:20:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.705 17:20:59 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:16:50.705 17:20:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.705 17:20:59 -- common/autotest_common.sh@10 -- # set +x 00:16:50.705 17:20:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.705 17:20:59 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:16:51.641 17:21:00 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:16:51.641 17:21:00 -- common/autotest_common.sh@1221 -- # local i=0 00:16:51.641 17:21:00 -- common/autotest_common.sh@1222 -- # grep -q -w nvme5n1 00:16:51.641 17:21:00 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME 00:16:51.641 17:21:00 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME 00:16:51.641 17:21:00 -- common/autotest_common.sh@1228 -- # grep -q -w nvme5n1 00:16:51.641 17:21:00 -- common/autotest_common.sh@1232 -- # return 0 00:16:51.641 17:21:00 -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:16:51.641 [global] 00:16:51.641 thread=1 00:16:51.641 invalidate=1 00:16:51.641 rw=read 00:16:51.641 time_based=1 00:16:51.641 
runtime=10 00:16:51.641 ioengine=libaio 00:16:51.641 direct=1 00:16:51.641 bs=1048576 00:16:51.641 iodepth=128 00:16:51.641 norandommap=1 00:16:51.641 numjobs=13 00:16:51.641 00:16:51.641 [job0] 00:16:51.641 filename=/dev/nvme0n1 00:16:51.641 [job1] 00:16:51.641 filename=/dev/nvme1n1 00:16:51.641 [job2] 00:16:51.641 filename=/dev/nvme2n1 00:16:51.641 [job3] 00:16:51.641 filename=/dev/nvme3n1 00:16:51.641 [job4] 00:16:51.641 filename=/dev/nvme4n1 00:16:51.641 [job5] 00:16:51.641 filename=/dev/nvme5n1 00:16:51.641 Could not set queue depth (nvme0n1) 00:16:51.641 Could not set queue depth (nvme1n1) 00:16:51.641 Could not set queue depth (nvme2n1) 00:16:51.641 Could not set queue depth (nvme3n1) 00:16:51.641 Could not set queue depth (nvme4n1) 00:16:51.641 Could not set queue depth (nvme5n1) 00:16:51.899 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:16:51.899 ... 00:16:51.899 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:16:51.899 ... 00:16:51.899 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:16:51.899 ... 00:16:51.899 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:16:51.899 ... 00:16:51.899 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:16:51.899 ... 00:16:51.899 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:16:51.899 ... 00:16:51.899 fio-3.35 00:16:51.899 Starting 78 threads 00:17:06.779 00:17:06.779 job0: (groupid=0, jobs=1): err= 0: pid=3024481: Wed Apr 24 17:21:13 2024 00:17:06.779 read: IOPS=0, BW=825KiB/s (845kB/s)(10.0MiB/12409msec) 00:17:06.779 slat (msec): min=5, max=2171, avg=1025.65, stdev=1066.11 00:17:06.779 clat (msec): min=2152, max=12346, avg=9055.99, stdev=3620.49 00:17:06.779 lat (msec): min=4279, max=12408, avg=10081.64, stdev=2809.32 00:17:06.779 clat percentiles (msec): 00:17:06.779 | 1.00th=[ 2165], 5.00th=[ 2165], 10.00th=[ 2165], 20.00th=[ 4279], 00:17:06.779 | 30.00th=[ 6477], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[10671], 00:17:06.779 | 70.00th=[10671], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:17:06.779 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:17:06.779 | 99.99th=[12281] 00:17:06.779 lat (msec) : >=2000=100.00% 00:17:06.779 cpu : usr=0.00%, sys=0.06%, ctx=46, majf=0, minf=2561 00:17:06.779 IO depths : 1=10.0%, 2=20.0%, 4=40.0%, 8=30.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:06.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.779 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.779 issued rwts: total=10,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.779 job0: (groupid=0, jobs=1): err= 0: pid=3024482: Wed Apr 24 17:21:13 2024 00:17:06.779 read: IOPS=5, BW=5588KiB/s (5722kB/s)(57.0MiB/10446msec) 00:17:06.779 slat (usec): min=722, max=3788.5k, avg=181748.48, stdev=672034.77 00:17:06.779 clat (msec): min=85, max=10444, avg=6545.08, stdev=3563.49 00:17:06.779 lat (msec): min=2131, max=10445, avg=6726.83, stdev=3491.62 00:17:06.779 clat percentiles (msec): 00:17:06.779 | 1.00th=[ 86], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 2198], 00:17:06.779 | 30.00th=[ 
4329], 40.00th=[ 4329], 50.00th=[ 6477], 60.00th=[10268], 00:17:06.779 | 70.00th=[10402], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402], 00:17:06.779 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:17:06.779 | 99.99th=[10402] 00:17:06.779 lat (msec) : 100=1.75%, >=2000=98.25% 00:17:06.779 cpu : usr=0.00%, sys=0.50%, ctx=81, majf=0, minf=14593 00:17:06.779 IO depths : 1=1.8%, 2=3.5%, 4=7.0%, 8=14.0%, 16=28.1%, 32=45.6%, >=64=0.0% 00:17:06.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.779 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:06.779 issued rwts: total=57,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.779 job0: (groupid=0, jobs=1): err= 0: pid=3024483: Wed Apr 24 17:21:13 2024 00:17:06.779 read: IOPS=8, BW=9038KiB/s (9255kB/s)(110MiB/12463msec) 00:17:06.779 slat (usec): min=521, max=2100.5k, avg=93964.57, stdev=405933.44 00:17:06.779 clat (msec): min=2125, max=12461, avg=11690.94, stdev=1922.44 00:17:06.779 lat (msec): min=4193, max=12462, avg=11784.91, stdev=1689.07 00:17:06.779 clat percentiles (msec): 00:17:06.779 | 1.00th=[ 4178], 5.00th=[ 6409], 10.00th=[10671], 20.00th=[12013], 00:17:06.779 | 30.00th=[12013], 40.00th=[12147], 50.00th=[12281], 60.00th=[12416], 00:17:06.779 | 70.00th=[12416], 80.00th=[12416], 90.00th=[12416], 95.00th=[12416], 00:17:06.779 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:17:06.779 | 99.99th=[12416] 00:17:06.779 lat (msec) : >=2000=100.00% 00:17:06.779 cpu : usr=0.02%, sys=0.65%, ctx=160, majf=0, minf=28161 00:17:06.779 IO depths : 1=0.9%, 2=1.8%, 4=3.6%, 8=7.3%, 16=14.5%, 32=29.1%, >=64=42.7% 00:17:06.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.779 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:06.779 issued rwts: total=110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.779 job0: (groupid=0, jobs=1): err= 0: pid=3024484: Wed Apr 24 17:21:13 2024 00:17:06.779 read: IOPS=0, BW=823KiB/s (843kB/s)(10.0MiB/12442msec) 00:17:06.779 slat (msec): min=6, max=2188, avg=1032.23, stdev=1071.37 00:17:06.779 clat (msec): min=2119, max=12347, avg=9214.48, stdev=3750.24 00:17:06.779 lat (msec): min=4266, max=12441, avg=10246.72, stdev=2905.78 00:17:06.779 clat percentiles (msec): 00:17:06.779 | 1.00th=[ 2123], 5.00th=[ 2123], 10.00th=[ 2123], 20.00th=[ 4279], 00:17:06.779 | 30.00th=[ 6409], 40.00th=[ 8658], 50.00th=[10671], 60.00th=[10671], 00:17:06.779 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:17:06.779 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:17:06.779 | 99.99th=[12281] 00:17:06.779 lat (msec) : >=2000=100.00% 00:17:06.779 cpu : usr=0.00%, sys=0.06%, ctx=51, majf=0, minf=2561 00:17:06.779 IO depths : 1=10.0%, 2=20.0%, 4=40.0%, 8=30.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:06.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.779 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.779 issued rwts: total=10,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.779 job0: (groupid=0, jobs=1): err= 0: pid=3024485: Wed Apr 24 17:21:13 2024 00:17:06.779 read: IOPS=2, BW=2891KiB/s (2960kB/s)(35.0MiB/12397msec) 00:17:06.779 slat (msec): min=5, 
max=2077, avg=292.64, stdev=701.22 00:17:06.779 clat (msec): min=2154, max=12391, avg=8232.76, stdev=2945.30 00:17:06.779 lat (msec): min=4223, max=12396, avg=8525.40, stdev=2830.19 00:17:06.779 clat percentiles (msec): 00:17:06.779 | 1.00th=[ 2165], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 4329], 00:17:06.779 | 30.00th=[ 6409], 40.00th=[ 6477], 50.00th=[ 8557], 60.00th=[ 8557], 00:17:06.779 | 70.00th=[10671], 80.00th=[10805], 90.00th=[12416], 95.00th=[12416], 00:17:06.779 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:17:06.779 | 99.99th=[12416] 00:17:06.779 lat (msec) : >=2000=100.00% 00:17:06.779 cpu : usr=0.00%, sys=0.19%, ctx=57, majf=0, minf=8961 00:17:06.779 IO depths : 1=2.9%, 2=5.7%, 4=11.4%, 8=22.9%, 16=45.7%, 32=11.4%, >=64=0.0% 00:17:06.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.779 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:06.779 issued rwts: total=35,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.779 job0: (groupid=0, jobs=1): err= 0: pid=3024486: Wed Apr 24 17:21:13 2024 00:17:06.779 read: IOPS=1, BW=1313KiB/s (1345kB/s)(16.0MiB/12475msec) 00:17:06.779 slat (msec): min=7, max=3788, avg=646.59, stdev=1185.09 00:17:06.779 clat (msec): min=2129, max=12456, avg=8928.69, stdev=3365.84 00:17:06.779 lat (msec): min=4233, max=12474, avg=9575.28, stdev=2939.22 00:17:06.779 clat percentiles (msec): 00:17:06.779 | 1.00th=[ 2123], 5.00th=[ 2123], 10.00th=[ 4245], 20.00th=[ 6409], 00:17:06.780 | 30.00th=[ 8490], 40.00th=[ 8557], 50.00th=[ 8557], 60.00th=[ 8557], 00:17:06.780 | 70.00th=[12416], 80.00th=[12416], 90.00th=[12416], 95.00th=[12416], 00:17:06.780 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:17:06.780 | 99.99th=[12416] 00:17:06.780 lat (msec) : >=2000=100.00% 00:17:06.780 cpu : usr=0.00%, sys=0.11%, ctx=50, majf=0, minf=4097 00:17:06.780 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:17:06.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.780 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.780 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.780 job0: (groupid=0, jobs=1): err= 0: pid=3024487: Wed Apr 24 17:21:13 2024 00:17:06.780 read: IOPS=3, BW=3928KiB/s (4022kB/s)(48.0MiB/12514msec) 00:17:06.780 slat (usec): min=706, max=4339.1k, avg=215856.78, stdev=769345.23 00:17:06.780 clat (msec): min=2152, max=12511, avg=11753.91, stdev=2250.69 00:17:06.780 lat (msec): min=4232, max=12513, avg=11969.77, stdev=1751.79 00:17:06.780 clat percentiles (msec): 00:17:06.780 | 1.00th=[ 2165], 5.00th=[ 4279], 10.00th=[10671], 20.00th=[12416], 00:17:06.780 | 30.00th=[12416], 40.00th=[12416], 50.00th=[12416], 60.00th=[12550], 00:17:06.780 | 70.00th=[12550], 80.00th=[12550], 90.00th=[12550], 95.00th=[12550], 00:17:06.780 | 99.00th=[12550], 99.50th=[12550], 99.90th=[12550], 99.95th=[12550], 00:17:06.780 | 99.99th=[12550] 00:17:06.780 lat (msec) : >=2000=100.00% 00:17:06.780 cpu : usr=0.00%, sys=0.35%, ctx=87, majf=0, minf=12289 00:17:06.780 IO depths : 1=2.1%, 2=4.2%, 4=8.3%, 8=16.7%, 16=33.3%, 32=35.4%, >=64=0.0% 00:17:06.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.780 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:06.780 issued rwts: 
total=48,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.780 job0: (groupid=0, jobs=1): err= 0: pid=3024488: Wed Apr 24 17:21:13 2024 00:17:06.780 read: IOPS=2, BW=2310KiB/s (2365kB/s)(28.0MiB/12413msec) 00:17:06.780 slat (msec): min=4, max=2141, avg=367.27, stdev=765.42 00:17:06.780 clat (msec): min=2129, max=12356, avg=10681.85, stdev=2840.06 00:17:06.780 lat (msec): min=4266, max=12412, avg=11049.13, stdev=2308.20 00:17:06.780 clat percentiles (msec): 00:17:06.780 | 1.00th=[ 2123], 5.00th=[ 4279], 10.00th=[ 6409], 20.00th=[ 8557], 00:17:06.780 | 30.00th=[12013], 40.00th=[12013], 50.00th=[12013], 60.00th=[12147], 00:17:06.780 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12416], 00:17:06.780 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:17:06.780 | 99.99th=[12416] 00:17:06.780 lat (msec) : >=2000=100.00% 00:17:06.780 cpu : usr=0.01%, sys=0.15%, ctx=98, majf=0, minf=7169 00:17:06.780 IO depths : 1=3.6%, 2=7.1%, 4=14.3%, 8=28.6%, 16=46.4%, 32=0.0%, >=64=0.0% 00:17:06.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.780 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:17:06.780 issued rwts: total=28,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.780 job0: (groupid=0, jobs=1): err= 0: pid=3024489: Wed Apr 24 17:21:13 2024 00:17:06.780 read: IOPS=4, BW=5022KiB/s (5143kB/s)(61.0MiB/12437msec) 00:17:06.780 slat (usec): min=734, max=2112.2k, avg=168938.01, stdev=548573.71 00:17:06.780 clat (msec): min=2131, max=12435, avg=9484.60, stdev=3110.09 00:17:06.780 lat (msec): min=4186, max=12436, avg=9653.54, stdev=2981.23 00:17:06.780 clat percentiles (msec): 00:17:06.780 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6342], 00:17:06.780 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[10671], 60.00th=[10671], 00:17:06.780 | 70.00th=[12281], 80.00th=[12416], 90.00th=[12416], 95.00th=[12416], 00:17:06.780 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:17:06.780 | 99.99th=[12416] 00:17:06.780 lat (msec) : >=2000=100.00% 00:17:06.780 cpu : usr=0.00%, sys=0.39%, ctx=72, majf=0, minf=15617 00:17:06.780 IO depths : 1=1.6%, 2=3.3%, 4=6.6%, 8=13.1%, 16=26.2%, 32=49.2%, >=64=0.0% 00:17:06.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.780 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:06.780 issued rwts: total=61,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.780 job0: (groupid=0, jobs=1): err= 0: pid=3024490: Wed Apr 24 17:21:13 2024 00:17:06.780 read: IOPS=2, BW=2054KiB/s (2103kB/s)(25.0MiB/12464msec) 00:17:06.780 slat (usec): min=689, max=2168.8k, avg=413372.17, stdev=830005.30 00:17:06.780 clat (msec): min=2129, max=12460, avg=11066.67, stdev=2857.96 00:17:06.780 lat (msec): min=4245, max=12463, avg=11480.04, stdev=2177.83 00:17:06.780 clat percentiles (msec): 00:17:06.780 | 1.00th=[ 2123], 5.00th=[ 4245], 10.00th=[ 6409], 20.00th=[ 8658], 00:17:06.780 | 30.00th=[12281], 40.00th=[12416], 50.00th=[12416], 60.00th=[12416], 00:17:06.780 | 70.00th=[12416], 80.00th=[12416], 90.00th=[12416], 95.00th=[12416], 00:17:06.780 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:17:06.780 | 99.99th=[12416] 00:17:06.780 lat (msec) : >=2000=100.00% 00:17:06.780 cpu : usr=0.00%, 
sys=0.18%, ctx=62, majf=0, minf=6401 00:17:06.780 IO depths : 1=4.0%, 2=8.0%, 4=16.0%, 8=32.0%, 16=40.0%, 32=0.0%, >=64=0.0% 00:17:06.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.780 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:17:06.780 issued rwts: total=25,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.780 job0: (groupid=0, jobs=1): err= 0: pid=3024491: Wed Apr 24 17:21:13 2024 00:17:06.780 read: IOPS=2, BW=3044KiB/s (3118kB/s)(37.0MiB/12445msec) 00:17:06.780 slat (usec): min=650, max=2123.8k, avg=278725.45, stdev=695619.18 00:17:06.780 clat (msec): min=2131, max=12443, avg=10685.32, stdev=2996.74 00:17:06.780 lat (msec): min=4226, max=12444, avg=10964.05, stdev=2637.08 00:17:06.780 clat percentiles (msec): 00:17:06.780 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4279], 20.00th=[ 8490], 00:17:06.780 | 30.00th=[10671], 40.00th=[12281], 50.00th=[12416], 60.00th=[12416], 00:17:06.780 | 70.00th=[12416], 80.00th=[12416], 90.00th=[12416], 95.00th=[12416], 00:17:06.780 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:17:06.780 | 99.99th=[12416] 00:17:06.780 lat (msec) : >=2000=100.00% 00:17:06.780 cpu : usr=0.00%, sys=0.23%, ctx=66, majf=0, minf=9473 00:17:06.780 IO depths : 1=2.7%, 2=5.4%, 4=10.8%, 8=21.6%, 16=43.2%, 32=16.2%, >=64=0.0% 00:17:06.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.780 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:06.780 issued rwts: total=37,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.780 job0: (groupid=0, jobs=1): err= 0: pid=3024492: Wed Apr 24 17:21:13 2024 00:17:06.780 read: IOPS=1, BW=1238KiB/s (1268kB/s)(15.0MiB/12409msec) 00:17:06.780 slat (msec): min=2, max=2123, avg=685.99, stdev=973.77 00:17:06.780 clat (msec): min=2118, max=12302, avg=8558.98, stdev=3252.26 00:17:06.780 lat (msec): min=4205, max=12408, avg=9244.97, stdev=2857.99 00:17:06.780 clat percentiles (msec): 00:17:06.780 | 1.00th=[ 2123], 5.00th=[ 2123], 10.00th=[ 4212], 20.00th=[ 4245], 00:17:06.780 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[ 8490], 60.00th=[10671], 00:17:06.780 | 70.00th=[10671], 80.00th=[10671], 90.00th=[12281], 95.00th=[12281], 00:17:06.780 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:17:06.780 | 99.99th=[12281] 00:17:06.780 lat (msec) : >=2000=100.00% 00:17:06.780 cpu : usr=0.00%, sys=0.07%, ctx=54, majf=0, minf=3841 00:17:06.780 IO depths : 1=6.7%, 2=13.3%, 4=26.7%, 8=53.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:06.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.780 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.780 issued rwts: total=15,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.780 job0: (groupid=0, jobs=1): err= 0: pid=3024493: Wed Apr 24 17:21:13 2024 00:17:06.780 read: IOPS=0, BW=659KiB/s (674kB/s)(8192KiB/12438msec) 00:17:06.780 slat (msec): min=9, max=2180, avg=1289.76, stdev=1042.43 00:17:06.780 clat (msec): min=2119, max=12321, avg=8421.69, stdev=3818.46 00:17:06.780 lat (msec): min=4233, max=12437, avg=9711.46, stdev=3050.93 00:17:06.780 clat percentiles (msec): 00:17:06.780 | 1.00th=[ 2123], 5.00th=[ 2123], 10.00th=[ 2123], 20.00th=[ 4245], 00:17:06.780 | 30.00th=[ 6409], 40.00th=[ 
8557], 50.00th=[ 8557], 60.00th=[10671], 00:17:06.780 | 70.00th=[10671], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:17:06.780 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:17:06.780 | 99.99th=[12281] 00:17:06.780 lat (msec) : >=2000=100.00% 00:17:06.780 cpu : usr=0.00%, sys=0.05%, ctx=55, majf=0, minf=2049 00:17:06.780 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:06.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.780 complete : 0=0.0%, 4=0.0%, 8=100.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.780 issued rwts: total=8,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.780 job1: (groupid=0, jobs=1): err= 0: pid=3024494: Wed Apr 24 17:21:13 2024 00:17:06.780 read: IOPS=257, BW=258MiB/s (270MB/s)(3223MiB/12509msec) 00:17:06.780 slat (usec): min=42, max=2104.6k, avg=3223.32, stdev=37272.64 00:17:06.780 clat (msec): min=115, max=4425, avg=480.09, stdev=808.10 00:17:06.780 lat (msec): min=116, max=4425, avg=483.32, stdev=810.44 00:17:06.780 clat percentiles (msec): 00:17:06.780 | 1.00th=[ 117], 5.00th=[ 117], 10.00th=[ 118], 20.00th=[ 118], 00:17:06.780 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 259], 60.00th=[ 300], 00:17:06.780 | 70.00th=[ 384], 80.00th=[ 600], 90.00th=[ 667], 95.00th=[ 785], 00:17:06.780 | 99.00th=[ 4396], 99.50th=[ 4396], 99.90th=[ 4396], 99.95th=[ 4396], 00:17:06.780 | 99.99th=[ 4396] 00:17:06.780 bw ( KiB/s): min= 1868, max=1103712, per=11.37%, avg=372714.24, stdev=262826.85, samples=17 00:17:06.780 iops : min= 1, max= 1077, avg=363.82, stdev=256.64, samples=17 00:17:06.780 lat (msec) : 250=30.50%, 500=42.29%, 750=21.25%, 1000=1.99%, >=2000=3.97% 00:17:06.780 cpu : usr=0.17%, sys=2.92%, ctx=2999, majf=0, minf=32769 00:17:06.781 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:17:06.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.781 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:06.781 issued rwts: total=3223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.781 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.781 job1: (groupid=0, jobs=1): err= 0: pid=3024495: Wed Apr 24 17:21:13 2024 00:17:06.781 read: IOPS=40, BW=40.7MiB/s (42.7MB/s)(508MiB/12480msec) 00:17:06.781 slat (usec): min=434, max=2069.5k, avg=20441.92, stdev=130521.54 00:17:06.781 clat (msec): min=1184, max=7802, avg=2920.16, stdev=2405.33 00:17:06.781 lat (msec): min=1189, max=7804, avg=2940.61, stdev=2409.71 00:17:06.781 clat percentiles (msec): 00:17:06.781 | 1.00th=[ 1234], 5.00th=[ 1284], 10.00th=[ 1334], 20.00th=[ 1385], 00:17:06.781 | 30.00th=[ 1435], 40.00th=[ 1485], 50.00th=[ 1536], 60.00th=[ 1636], 00:17:06.781 | 70.00th=[ 1989], 80.00th=[ 6812], 90.00th=[ 7349], 95.00th=[ 7483], 00:17:06.781 | 99.00th=[ 7684], 99.50th=[ 7752], 99.90th=[ 7819], 99.95th=[ 7819], 00:17:06.781 | 99.99th=[ 7819] 00:17:06.781 bw ( KiB/s): min= 1896, max=120832, per=1.98%, avg=64998.67, stdev=40988.12, samples=12 00:17:06.781 iops : min= 1, max= 118, avg=63.33, stdev=40.13, samples=12 00:17:06.781 lat (msec) : 2000=71.06%, >=2000=28.94% 00:17:06.781 cpu : usr=0.02%, sys=1.02%, ctx=1225, majf=0, minf=32769 00:17:06.781 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.3%, >=64=87.6% 00:17:06.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.781 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.3% 00:17:06.781 issued rwts: total=508,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.781 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.781 job1: (groupid=0, jobs=1): err= 0: pid=3024496: Wed Apr 24 17:21:13 2024 00:17:06.781 read: IOPS=35, BW=35.6MiB/s (37.4MB/s)(443MiB/12433msec) 00:17:06.781 slat (usec): min=53, max=2092.4k, avg=23199.95, stdev=197023.29 00:17:06.781 clat (msec): min=249, max=10999, avg=3456.82, stdev=4579.64 00:17:06.781 lat (msec): min=250, max=11000, avg=3480.02, stdev=4591.19 00:17:06.781 clat percentiles (msec): 00:17:06.781 | 1.00th=[ 251], 5.00th=[ 259], 10.00th=[ 313], 20.00th=[ 326], 00:17:06.781 | 30.00th=[ 330], 40.00th=[ 456], 50.00th=[ 659], 60.00th=[ 835], 00:17:06.781 | 70.00th=[ 4245], 80.00th=[10805], 90.00th=[10939], 95.00th=[10939], 00:17:06.781 | 99.00th=[10939], 99.50th=[10939], 99.90th=[10939], 99.95th=[10939], 00:17:06.781 | 99.99th=[10939] 00:17:06.781 bw ( KiB/s): min= 1450, max=407552, per=2.47%, avg=80821.25, stdev=142938.41, samples=8 00:17:06.781 iops : min= 1, max= 398, avg=78.88, stdev=139.62, samples=8 00:17:06.781 lat (msec) : 250=0.90%, 500=41.99%, 750=11.74%, 1000=14.00%, >=2000=31.38% 00:17:06.781 cpu : usr=0.02%, sys=0.92%, ctx=541, majf=0, minf=32769 00:17:06.781 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.2%, >=64=85.8% 00:17:06.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.781 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:17:06.781 issued rwts: total=443,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.781 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.781 job1: (groupid=0, jobs=1): err= 0: pid=3024497: Wed Apr 24 17:21:13 2024 00:17:06.781 read: IOPS=47, BW=47.7MiB/s (50.0MB/s)(496MiB/10394msec) 00:17:06.781 slat (usec): min=38, max=2063.1k, avg=20771.98, stdev=160425.88 00:17:06.781 clat (msec): min=88, max=8640, avg=2316.22, stdev=1758.03 00:17:06.781 lat (msec): min=481, max=8711, avg=2337.00, stdev=1771.17 00:17:06.781 clat percentiles (msec): 00:17:06.781 | 1.00th=[ 506], 5.00th=[ 558], 10.00th=[ 600], 20.00th=[ 760], 00:17:06.781 | 30.00th=[ 827], 40.00th=[ 852], 50.00th=[ 860], 60.00th=[ 3473], 00:17:06.781 | 70.00th=[ 3977], 80.00th=[ 4396], 90.00th=[ 4530], 95.00th=[ 4665], 00:17:06.781 | 99.00th=[ 4799], 99.50th=[ 6544], 99.90th=[ 8658], 99.95th=[ 8658], 00:17:06.781 | 99.99th=[ 8658] 00:17:06.781 bw ( KiB/s): min= 8192, max=221184, per=3.28%, avg=107666.29, stdev=79905.75, samples=7 00:17:06.781 iops : min= 8, max= 216, avg=105.14, stdev=78.03, samples=7 00:17:06.781 lat (msec) : 100=0.20%, 500=0.40%, 750=19.15%, 1000=33.87%, >=2000=46.37% 00:17:06.781 cpu : usr=0.00%, sys=1.00%, ctx=624, majf=0, minf=32769 00:17:06.781 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.5%, >=64=87.3% 00:17:06.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.781 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:17:06.781 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.781 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.781 job1: (groupid=0, jobs=1): err= 0: pid=3024498: Wed Apr 24 17:21:13 2024 00:17:06.781 read: IOPS=42, BW=43.0MiB/s (45.1MB/s)(534MiB/12423msec) 00:17:06.781 slat (usec): min=55, max=2086.3k, avg=19320.91, stdev=157084.64 00:17:06.781 clat (msec): min=605, max=9593, avg=2811.10, stdev=3180.89 00:17:06.781 lat (msec): min=606, max=9594, avg=2830.42, stdev=3191.66 
00:17:06.781 clat percentiles (msec): 00:17:06.781 | 1.00th=[ 609], 5.00th=[ 625], 10.00th=[ 634], 20.00th=[ 659], 00:17:06.781 | 30.00th=[ 693], 40.00th=[ 827], 50.00th=[ 1036], 60.00th=[ 1053], 00:17:06.781 | 70.00th=[ 4111], 80.00th=[ 5537], 90.00th=[ 9329], 95.00th=[ 9463], 00:17:06.781 | 99.00th=[ 9597], 99.50th=[ 9597], 99.90th=[ 9597], 99.95th=[ 9597], 00:17:06.781 | 99.99th=[ 9597] 00:17:06.781 bw ( KiB/s): min= 1450, max=210944, per=2.82%, avg=92548.67, stdev=79889.91, samples=9 00:17:06.781 iops : min= 1, max= 206, avg=90.33, stdev=78.08, samples=9 00:17:06.781 lat (msec) : 750=34.83%, 1000=13.86%, 2000=18.91%, >=2000=32.40% 00:17:06.781 cpu : usr=0.00%, sys=0.99%, ctx=479, majf=0, minf=32769 00:17:06.781 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=3.0%, 32=6.0%, >=64=88.2% 00:17:06.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.781 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:06.781 issued rwts: total=534,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.781 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.781 job1: (groupid=0, jobs=1): err= 0: pid=3024499: Wed Apr 24 17:21:13 2024 00:17:06.781 read: IOPS=2, BW=2972KiB/s (3043kB/s)(36.0MiB/12405msec) 00:17:06.781 slat (usec): min=531, max=2113.2k, avg=285470.78, stdev=697624.15 00:17:06.781 clat (msec): min=2127, max=12402, avg=9446.52, stdev=3204.07 00:17:06.781 lat (msec): min=4240, max=12404, avg=9731.99, stdev=2983.59 00:17:06.781 clat percentiles (msec): 00:17:06.781 | 1.00th=[ 2123], 5.00th=[ 4245], 10.00th=[ 4245], 20.00th=[ 6409], 00:17:06.781 | 30.00th=[ 8557], 40.00th=[ 8658], 50.00th=[10671], 60.00th=[10671], 00:17:06.781 | 70.00th=[12416], 80.00th=[12416], 90.00th=[12416], 95.00th=[12416], 00:17:06.781 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:17:06.781 | 99.99th=[12416] 00:17:06.781 lat (msec) : >=2000=100.00% 00:17:06.781 cpu : usr=0.00%, sys=0.22%, ctx=53, majf=0, minf=9217 00:17:06.781 IO depths : 1=2.8%, 2=5.6%, 4=11.1%, 8=22.2%, 16=44.4%, 32=13.9%, >=64=0.0% 00:17:06.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.781 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:06.781 issued rwts: total=36,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.781 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.781 job1: (groupid=0, jobs=1): err= 0: pid=3024500: Wed Apr 24 17:21:13 2024 00:17:06.781 read: IOPS=3, BW=3787KiB/s (3878kB/s)(46.0MiB/12438msec) 00:17:06.781 slat (usec): min=749, max=2098.0k, avg=224164.88, stdev=630368.21 00:17:06.781 clat (msec): min=2125, max=12435, avg=8838.63, stdev=3274.88 00:17:06.781 lat (msec): min=4223, max=12437, avg=9062.79, stdev=3155.95 00:17:06.781 clat percentiles (msec): 00:17:06.781 | 1.00th=[ 2123], 5.00th=[ 4245], 10.00th=[ 4245], 20.00th=[ 4329], 00:17:06.781 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[ 8557], 60.00th=[10671], 00:17:06.781 | 70.00th=[12416], 80.00th=[12416], 90.00th=[12416], 95.00th=[12416], 00:17:06.781 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:17:06.781 | 99.99th=[12416] 00:17:06.781 lat (msec) : >=2000=100.00% 00:17:06.781 cpu : usr=0.00%, sys=0.29%, ctx=62, majf=0, minf=11777 00:17:06.781 IO depths : 1=2.2%, 2=4.3%, 4=8.7%, 8=17.4%, 16=34.8%, 32=32.6%, >=64=0.0% 00:17:06.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.781 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, 
>=64=0.0% 00:17:06.781 issued rwts: total=46,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.781 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.781 job1: (groupid=0, jobs=1): err= 0: pid=3024501: Wed Apr 24 17:21:13 2024 00:17:06.781 read: IOPS=106, BW=107MiB/s (112MB/s)(1114MiB/10443msec) 00:17:06.781 slat (usec): min=36, max=2052.7k, avg=9285.94, stdev=86439.93 00:17:06.781 clat (msec): min=92, max=6126, avg=1110.20, stdev=1105.14 00:17:06.781 lat (msec): min=249, max=6162, avg=1119.49, stdev=1113.57 00:17:06.781 clat percentiles (msec): 00:17:06.781 | 1.00th=[ 249], 5.00th=[ 251], 10.00th=[ 251], 20.00th=[ 253], 00:17:06.781 | 30.00th=[ 262], 40.00th=[ 502], 50.00th=[ 659], 60.00th=[ 818], 00:17:06.781 | 70.00th=[ 1083], 80.00th=[ 2500], 90.00th=[ 3239], 95.00th=[ 3339], 00:17:06.781 | 99.00th=[ 3440], 99.50th=[ 3440], 99.90th=[ 4396], 99.95th=[ 6141], 00:17:06.781 | 99.99th=[ 6141] 00:17:06.781 bw ( KiB/s): min=12288, max=505856, per=5.60%, avg=183575.27, stdev=144096.19, samples=11 00:17:06.781 iops : min= 12, max= 494, avg=179.27, stdev=140.72, samples=11 00:17:06.781 lat (msec) : 100=0.09%, 250=4.76%, 500=35.01%, 750=13.73%, 1000=14.18% 00:17:06.781 lat (msec) : 2000=9.52%, >=2000=22.71% 00:17:06.781 cpu : usr=0.02%, sys=1.64%, ctx=1361, majf=0, minf=32769 00:17:06.781 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.3% 00:17:06.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.781 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:06.781 issued rwts: total=1114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.781 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.781 job1: (groupid=0, jobs=1): err= 0: pid=3024502: Wed Apr 24 17:21:13 2024 00:17:06.781 read: IOPS=55, BW=55.7MiB/s (58.5MB/s)(697MiB/12503msec) 00:17:06.781 slat (usec): min=63, max=2130.8k, avg=14880.27, stdev=137195.34 00:17:06.781 clat (msec): min=392, max=8945, avg=2152.11, stdev=3036.83 00:17:06.781 lat (msec): min=394, max=8949, avg=2166.99, stdev=3045.47 00:17:06.781 clat percentiles (msec): 00:17:06.781 | 1.00th=[ 393], 5.00th=[ 401], 10.00th=[ 409], 20.00th=[ 531], 00:17:06.781 | 30.00th=[ 617], 40.00th=[ 642], 50.00th=[ 659], 60.00th=[ 827], 00:17:06.781 | 70.00th=[ 1053], 80.00th=[ 2123], 90.00th=[ 8792], 95.00th=[ 8792], 00:17:06.781 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:17:06.782 | 99.99th=[ 8926] 00:17:06.782 bw ( KiB/s): min= 1954, max=265708, per=3.95%, avg=129637.11, stdev=109369.59, samples=9 00:17:06.782 iops : min= 1, max= 259, avg=126.44, stdev=106.86, samples=9 00:17:06.782 lat (msec) : 500=18.94%, 750=37.59%, 1000=10.33%, 2000=13.06%, >=2000=20.09% 00:17:06.782 cpu : usr=0.05%, sys=1.35%, ctx=642, majf=0, minf=32769 00:17:06.782 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.6%, >=64=91.0% 00:17:06.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.782 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:06.782 issued rwts: total=697,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.782 job1: (groupid=0, jobs=1): err= 0: pid=3024503: Wed Apr 24 17:21:13 2024 00:17:06.782 read: IOPS=62, BW=62.3MiB/s (65.3MB/s)(779MiB/12502msec) 00:17:06.782 slat (usec): min=45, max=2156.4k, avg=13311.04, stdev=131060.79 00:17:06.782 clat (msec): min=507, max=9170, avg=1976.50, stdev=3009.38 00:17:06.782 lat (msec): min=510, 
max=9174, avg=1989.81, stdev=3018.62 00:17:06.782 clat percentiles (msec): 00:17:06.782 | 1.00th=[ 510], 5.00th=[ 531], 10.00th=[ 558], 20.00th=[ 592], 00:17:06.782 | 30.00th=[ 609], 40.00th=[ 617], 50.00th=[ 642], 60.00th=[ 659], 00:17:06.782 | 70.00th=[ 709], 80.00th=[ 810], 90.00th=[ 8792], 95.00th=[ 9060], 00:17:06.782 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194], 00:17:06.782 | 99.99th=[ 9194] 00:17:06.782 bw ( KiB/s): min= 2003, max=249856, per=4.07%, avg=133525.10, stdev=99609.52, samples=10 00:17:06.782 iops : min= 1, max= 244, avg=130.30, stdev=97.42, samples=10 00:17:06.782 lat (msec) : 750=75.48%, 1000=7.45%, >=2000=17.07% 00:17:06.782 cpu : usr=0.09%, sys=1.31%, ctx=708, majf=0, minf=32769 00:17:06.782 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.1%, >=64=91.9% 00:17:06.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.782 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:06.782 issued rwts: total=779,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.782 job1: (groupid=0, jobs=1): err= 0: pid=3024504: Wed Apr 24 17:21:13 2024 00:17:06.782 read: IOPS=13, BW=13.4MiB/s (14.0MB/s)(167MiB/12482msec) 00:17:06.782 slat (usec): min=355, max=2102.7k, avg=61831.45, stdev=318349.46 00:17:06.782 clat (msec): min=910, max=12375, avg=9204.85, stdev=4166.12 00:17:06.782 lat (msec): min=911, max=12414, avg=9266.69, stdev=4134.01 00:17:06.782 clat percentiles (msec): 00:17:06.782 | 1.00th=[ 927], 5.00th=[ 1133], 10.00th=[ 1301], 20.00th=[ 4279], 00:17:06.782 | 30.00th=[ 8557], 40.00th=[11208], 50.00th=[11610], 60.00th=[11879], 00:17:06.782 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12147], 95.00th=[12147], 00:17:06.782 | 99.00th=[12147], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:17:06.782 | 99.99th=[12416] 00:17:06.782 bw ( KiB/s): min= 2052, max=34816, per=0.36%, avg=11703.43, stdev=10946.43, samples=7 00:17:06.782 iops : min= 2, max= 34, avg=11.43, stdev=10.69, samples=7 00:17:06.782 lat (msec) : 1000=2.40%, 2000=14.37%, >=2000=83.23% 00:17:06.782 cpu : usr=0.00%, sys=0.62%, ctx=341, majf=0, minf=32769 00:17:06.782 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.8%, 16=9.6%, 32=19.2%, >=64=62.3% 00:17:06.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.782 complete : 0=0.0%, 4=97.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.4% 00:17:06.782 issued rwts: total=167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.782 job1: (groupid=0, jobs=1): err= 0: pid=3024505: Wed Apr 24 17:21:13 2024 00:17:06.782 read: IOPS=86, BW=86.4MiB/s (90.6MB/s)(900MiB/10412msec) 00:17:06.782 slat (usec): min=42, max=2069.8k, avg=11458.31, stdev=97642.00 00:17:06.782 clat (msec): min=92, max=4943, avg=1406.98, stdev=1349.39 00:17:06.782 lat (msec): min=248, max=4945, avg=1418.44, stdev=1352.25 00:17:06.782 clat percentiles (msec): 00:17:06.782 | 1.00th=[ 249], 5.00th=[ 266], 10.00th=[ 309], 20.00th=[ 567], 00:17:06.782 | 30.00th=[ 676], 40.00th=[ 718], 50.00th=[ 902], 60.00th=[ 1250], 00:17:06.782 | 70.00th=[ 1401], 80.00th=[ 1485], 90.00th=[ 4463], 95.00th=[ 4665], 00:17:06.782 | 99.00th=[ 4866], 99.50th=[ 4933], 99.90th=[ 4933], 99.95th=[ 4933], 00:17:06.782 | 99.99th=[ 4933] 00:17:06.782 bw ( KiB/s): min= 6144, max=299008, per=3.71%, avg=121603.62, stdev=80596.88, samples=13 00:17:06.782 iops : min= 6, max= 292, avg=118.69, stdev=78.72, 
samples=13 00:17:06.782 lat (msec) : 100=0.11%, 250=0.89%, 500=15.67%, 750=28.78%, 1000=5.89% 00:17:06.782 lat (msec) : 2000=34.22%, >=2000=14.44% 00:17:06.782 cpu : usr=0.05%, sys=1.43%, ctx=1526, majf=0, minf=32769 00:17:06.782 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=93.0% 00:17:06.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.782 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:06.782 issued rwts: total=900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.782 job1: (groupid=0, jobs=1): err= 0: pid=3024506: Wed Apr 24 17:21:13 2024 00:17:06.782 read: IOPS=48, BW=48.1MiB/s (50.4MB/s)(597MiB/12419msec) 00:17:06.782 slat (usec): min=54, max=2115.8k, avg=17287.42, stdev=163014.71 00:17:06.782 clat (msec): min=117, max=6709, avg=1583.45, stdev=1718.35 00:17:06.782 lat (msec): min=118, max=8484, avg=1600.73, stdev=1751.71 00:17:06.782 clat percentiles (msec): 00:17:06.782 | 1.00th=[ 117], 5.00th=[ 118], 10.00th=[ 118], 20.00th=[ 146], 00:17:06.782 | 30.00th=[ 207], 40.00th=[ 351], 50.00th=[ 380], 60.00th=[ 1854], 00:17:06.782 | 70.00th=[ 1921], 80.00th=[ 3675], 90.00th=[ 3742], 95.00th=[ 4665], 00:17:06.782 | 99.00th=[ 6342], 99.50th=[ 6409], 99.90th=[ 6678], 99.95th=[ 6678], 00:17:06.782 | 99.99th=[ 6678] 00:17:06.782 bw ( KiB/s): min= 1450, max=499712, per=5.87%, avg=192392.40, stdev=194150.99, samples=5 00:17:06.782 iops : min= 1, max= 488, avg=187.80, stdev=189.70, samples=5 00:17:06.782 lat (msec) : 250=37.19%, 500=16.25%, 2000=20.27%, >=2000=26.30% 00:17:06.782 cpu : usr=0.02%, sys=0.94%, ctx=551, majf=0, minf=32769 00:17:06.782 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.7%, 32=5.4%, >=64=89.4% 00:17:06.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.782 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:06.782 issued rwts: total=597,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.782 job2: (groupid=0, jobs=1): err= 0: pid=3024507: Wed Apr 24 17:21:13 2024 00:17:06.782 read: IOPS=2, BW=2546KiB/s (2607kB/s)(31.0MiB/12467msec) 00:17:06.782 slat (usec): min=1282, max=2145.1k, avg=334566.35, stdev=748705.39 00:17:06.782 clat (msec): min=2095, max=12463, avg=9679.98, stdev=3480.56 00:17:06.782 lat (msec): min=4172, max=12466, avg=10014.55, stdev=3215.58 00:17:06.782 clat percentiles (msec): 00:17:06.782 | 1.00th=[ 2089], 5.00th=[ 4178], 10.00th=[ 4245], 20.00th=[ 6342], 00:17:06.782 | 30.00th=[ 6409], 40.00th=[10671], 50.00th=[12281], 60.00th=[12416], 00:17:06.782 | 70.00th=[12416], 80.00th=[12416], 90.00th=[12416], 95.00th=[12416], 00:17:06.782 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:17:06.782 | 99.99th=[12416] 00:17:06.782 lat (msec) : >=2000=100.00% 00:17:06.782 cpu : usr=0.00%, sys=0.18%, ctx=85, majf=0, minf=7937 00:17:06.782 IO depths : 1=3.2%, 2=6.5%, 4=12.9%, 8=25.8%, 16=51.6%, 32=0.0%, >=64=0.0% 00:17:06.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.782 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:17:06.782 issued rwts: total=31,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.782 job2: (groupid=0, jobs=1): err= 0: pid=3024508: Wed Apr 24 17:21:13 2024 00:17:06.782 read: IOPS=7, BW=7680KiB/s 
(7864kB/s)(93.0MiB/12400msec) 00:17:06.782 slat (usec): min=366, max=3568.8k, avg=110695.27, stdev=516213.11 00:17:06.782 clat (msec): min=2104, max=12387, avg=10768.46, stdev=2733.76 00:17:06.782 lat (msec): min=4173, max=12399, avg=10879.15, stdev=2583.40 00:17:06.782 clat percentiles (msec): 00:17:06.782 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 6342], 20.00th=[ 8490], 00:17:06.782 | 30.00th=[12147], 40.00th=[12147], 50.00th=[12147], 60.00th=[12147], 00:17:06.782 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12416], 00:17:06.782 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:17:06.782 | 99.99th=[12416] 00:17:06.782 lat (msec) : >=2000=100.00% 00:17:06.782 cpu : usr=0.00%, sys=0.48%, ctx=90, majf=0, minf=23809 00:17:06.782 IO depths : 1=1.1%, 2=2.2%, 4=4.3%, 8=8.6%, 16=17.2%, 32=34.4%, >=64=32.3% 00:17:06.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.782 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:06.782 issued rwts: total=93,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.782 job2: (groupid=0, jobs=1): err= 0: pid=3024509: Wed Apr 24 17:21:13 2024 00:17:06.782 read: IOPS=21, BW=21.1MiB/s (22.2MB/s)(264MiB/12486msec) 00:17:06.782 slat (usec): min=44, max=3789.0k, avg=39347.47, stdev=296754.79 00:17:06.782 clat (msec): min=840, max=12323, avg=4556.86, stdev=3003.93 00:17:06.782 lat (msec): min=840, max=12367, avg=4596.20, stdev=3032.55 00:17:06.782 clat percentiles (msec): 00:17:06.782 | 1.00th=[ 844], 5.00th=[ 852], 10.00th=[ 852], 20.00th=[ 852], 00:17:06.782 | 30.00th=[ 860], 40.00th=[ 4212], 50.00th=[ 4665], 60.00th=[ 7148], 00:17:06.782 | 70.00th=[ 7349], 80.00th=[ 7550], 90.00th=[ 7684], 95.00th=[ 7752], 00:17:06.782 | 99.00th=[ 8490], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:17:06.782 | 99.99th=[12281] 00:17:06.782 bw ( KiB/s): min= 1868, max=135168, per=1.71%, avg=56079.20, stdev=64290.95, samples=5 00:17:06.782 iops : min= 1, max= 132, avg=54.60, stdev=62.96, samples=5 00:17:06.782 lat (msec) : 1000=33.33%, 2000=2.27%, >=2000=64.39% 00:17:06.782 cpu : usr=0.02%, sys=0.70%, ctx=256, majf=0, minf=32769 00:17:06.782 IO depths : 1=0.4%, 2=0.8%, 4=1.5%, 8=3.0%, 16=6.1%, 32=12.1%, >=64=76.1% 00:17:06.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.782 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:17:06.782 issued rwts: total=264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.782 job2: (groupid=0, jobs=1): err= 0: pid=3024510: Wed Apr 24 17:21:13 2024 00:17:06.782 read: IOPS=301, BW=301MiB/s (316MB/s)(3753MiB/12451msec) 00:17:06.782 slat (usec): min=37, max=3776.5k, avg=2747.05, stdev=70346.43 00:17:06.782 clat (msec): min=118, max=6093, avg=291.20, stdev=783.06 00:17:06.783 lat (msec): min=119, max=6106, avg=293.95, stdev=790.38 00:17:06.783 clat percentiles (msec): 00:17:06.783 | 1.00th=[ 120], 5.00th=[ 121], 10.00th=[ 121], 20.00th=[ 122], 00:17:06.783 | 30.00th=[ 124], 40.00th=[ 125], 50.00th=[ 125], 60.00th=[ 126], 00:17:06.783 | 70.00th=[ 126], 80.00th=[ 127], 90.00th=[ 243], 95.00th=[ 288], 00:17:06.783 | 99.00th=[ 4396], 99.50th=[ 4396], 99.90th=[ 4463], 99.95th=[ 4463], 00:17:06.783 | 99.99th=[ 6074] 00:17:06.783 bw ( KiB/s): min= 1450, max=1046528, per=22.67%, avg=743034.00, stdev=369569.93, samples=10 00:17:06.783 iops : min= 1, max= 1022, 
avg=725.40, stdev=361.03, samples=10 00:17:06.783 lat (msec) : 250=93.66%, 500=2.72%, >=2000=3.62% 00:17:06.783 cpu : usr=0.04%, sys=2.18%, ctx=3587, majf=0, minf=32769 00:17:06.783 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:17:06.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:06.783 issued rwts: total=3753,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.783 job2: (groupid=0, jobs=1): err= 0: pid=3024511: Wed Apr 24 17:21:13 2024 00:17:06.783 read: IOPS=16, BW=16.1MiB/s (16.9MB/s)(200MiB/12436msec) 00:17:06.783 slat (usec): min=116, max=2080.3k, avg=51511.86, stdev=296172.09 00:17:06.783 clat (msec): min=616, max=12422, avg=7661.63, stdev=4098.63 00:17:06.783 lat (msec): min=617, max=12423, avg=7713.14, stdev=4089.77 00:17:06.783 clat percentiles (msec): 00:17:06.783 | 1.00th=[ 617], 5.00th=[ 625], 10.00th=[ 743], 20.00th=[ 3641], 00:17:06.783 | 30.00th=[ 4279], 40.00th=[ 6409], 50.00th=[ 7752], 60.00th=[10671], 00:17:06.783 | 70.00th=[11745], 80.00th=[11745], 90.00th=[11879], 95.00th=[12281], 00:17:06.783 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:17:06.783 | 99.99th=[12416] 00:17:06.783 bw ( KiB/s): min= 1450, max=51200, per=0.65%, avg=21272.29, stdev=19702.24, samples=7 00:17:06.783 iops : min= 1, max= 50, avg=20.71, stdev=19.31, samples=7 00:17:06.783 lat (msec) : 750=11.00%, 2000=4.00%, >=2000=85.00% 00:17:06.783 cpu : usr=0.00%, sys=0.75%, ctx=138, majf=0, minf=32769 00:17:06.783 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=8.0%, 32=16.0%, >=64=68.5% 00:17:06.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.783 complete : 0=0.0%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.4% 00:17:06.783 issued rwts: total=200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.783 job2: (groupid=0, jobs=1): err= 0: pid=3024512: Wed Apr 24 17:21:13 2024 00:17:06.783 read: IOPS=2, BW=2065KiB/s (2114kB/s)(25.0MiB/12399msec) 00:17:06.783 slat (usec): min=760, max=2113.2k, avg=411173.99, stdev=808648.72 00:17:06.783 clat (msec): min=2118, max=12394, avg=9516.23, stdev=3349.58 00:17:06.783 lat (msec): min=4231, max=12397, avg=9927.40, stdev=3018.16 00:17:06.783 clat percentiles (msec): 00:17:06.783 | 1.00th=[ 2123], 5.00th=[ 4245], 10.00th=[ 4279], 20.00th=[ 6342], 00:17:06.783 | 30.00th=[ 8490], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[12281], 00:17:06.783 | 70.00th=[12281], 80.00th=[12416], 90.00th=[12416], 95.00th=[12416], 00:17:06.783 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:17:06.783 | 99.99th=[12416] 00:17:06.783 lat (msec) : >=2000=100.00% 00:17:06.783 cpu : usr=0.00%, sys=0.16%, ctx=59, majf=0, minf=6401 00:17:06.783 IO depths : 1=4.0%, 2=8.0%, 4=16.0%, 8=32.0%, 16=40.0%, 32=0.0%, >=64=0.0% 00:17:06.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.783 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:17:06.783 issued rwts: total=25,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.783 job2: (groupid=0, jobs=1): err= 0: pid=3024513: Wed Apr 24 17:21:13 2024 00:17:06.783 read: IOPS=2, BW=3042KiB/s (3115kB/s)(37.0MiB/12453msec) 00:17:06.783 slat (usec): min=728, max=3786.5k, 
avg=279622.44, stdev=823026.39 00:17:06.783 clat (msec): min=2106, max=12449, avg=9016.51, stdev=3609.29 00:17:06.783 lat (msec): min=4177, max=12452, avg=9296.13, stdev=3456.62 00:17:06.783 clat percentiles (msec): 00:17:06.783 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 4245], 00:17:06.783 | 30.00th=[ 6342], 40.00th=[ 6342], 50.00th=[ 8490], 60.00th=[12416], 00:17:06.783 | 70.00th=[12416], 80.00th=[12416], 90.00th=[12416], 95.00th=[12416], 00:17:06.783 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:17:06.783 | 99.99th=[12416] 00:17:06.783 lat (msec) : >=2000=100.00% 00:17:06.783 cpu : usr=0.01%, sys=0.27%, ctx=68, majf=0, minf=9473 00:17:06.783 IO depths : 1=2.7%, 2=5.4%, 4=10.8%, 8=21.6%, 16=43.2%, 32=16.2%, >=64=0.0% 00:17:06.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.783 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:06.783 issued rwts: total=37,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.783 job2: (groupid=0, jobs=1): err= 0: pid=3024514: Wed Apr 24 17:21:13 2024 00:17:06.783 read: IOPS=48, BW=48.9MiB/s (51.2MB/s)(606MiB/12399msec) 00:17:06.783 slat (usec): min=41, max=2111.3k, avg=16984.60, stdev=163774.90 00:17:06.783 clat (msec): min=219, max=8547, avg=990.96, stdev=1431.86 00:17:06.783 lat (msec): min=220, max=8568, avg=1007.95, stdev=1474.81 00:17:06.783 clat percentiles (msec): 00:17:06.783 | 1.00th=[ 224], 5.00th=[ 230], 10.00th=[ 232], 20.00th=[ 234], 00:17:06.783 | 30.00th=[ 236], 40.00th=[ 239], 50.00th=[ 284], 60.00th=[ 347], 00:17:06.783 | 70.00th=[ 368], 80.00th=[ 2836], 90.00th=[ 2970], 95.00th=[ 3037], 00:17:06.783 | 99.00th=[ 6812], 99.50th=[ 8490], 99.90th=[ 8557], 99.95th=[ 8557], 00:17:06.783 | 99.99th=[ 8557] 00:17:06.783 bw ( KiB/s): min= 1450, max=538624, per=7.46%, avg=244561.50, stdev=251919.30, samples=4 00:17:06.783 iops : min= 1, max= 526, avg=238.50, stdev=246.36, samples=4 00:17:06.783 lat (msec) : 250=46.04%, 500=30.53%, 1000=0.17%, >=2000=23.27% 00:17:06.783 cpu : usr=0.02%, sys=0.73%, ctx=573, majf=0, minf=32769 00:17:06.783 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.3%, >=64=89.6% 00:17:06.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.783 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:06.783 issued rwts: total=606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.783 job2: (groupid=0, jobs=1): err= 0: pid=3024515: Wed Apr 24 17:21:13 2024 00:17:06.783 read: IOPS=4, BW=4518KiB/s (4626kB/s)(55.0MiB/12466msec) 00:17:06.783 slat (usec): min=630, max=2100.9k, avg=188123.35, stdev=575365.07 00:17:06.783 clat (msec): min=2118, max=12464, avg=10940.08, stdev=2817.30 00:17:06.783 lat (msec): min=4219, max=12465, avg=11128.21, stdev=2550.13 00:17:06.783 clat percentiles (msec): 00:17:06.783 | 1.00th=[ 2123], 5.00th=[ 4245], 10.00th=[ 6409], 20.00th=[ 8557], 00:17:06.783 | 30.00th=[12281], 40.00th=[12416], 50.00th=[12416], 60.00th=[12416], 00:17:06.783 | 70.00th=[12416], 80.00th=[12416], 90.00th=[12416], 95.00th=[12416], 00:17:06.783 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:17:06.783 | 99.99th=[12416] 00:17:06.783 lat (msec) : >=2000=100.00% 00:17:06.783 cpu : usr=0.00%, sys=0.39%, ctx=81, majf=0, minf=14081 00:17:06.783 IO depths : 1=1.8%, 2=3.6%, 4=7.3%, 8=14.5%, 16=29.1%, 32=43.6%, 
>=64=0.0% 00:17:06.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.783 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:06.783 issued rwts: total=55,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.783 job2: (groupid=0, jobs=1): err= 0: pid=3024516: Wed Apr 24 17:21:13 2024 00:17:06.783 read: IOPS=2, BW=2455KiB/s (2514kB/s)(30.0MiB/12511msec) 00:17:06.783 slat (usec): min=749, max=2157.5k, avg=346739.17, stdev=764951.92 00:17:06.783 clat (msec): min=2108, max=12509, avg=10991.33, stdev=2939.75 00:17:06.783 lat (msec): min=4219, max=12510, avg=11338.07, stdev=2424.15 00:17:06.783 clat percentiles (msec): 00:17:06.783 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 4279], 20.00th=[ 8658], 00:17:06.783 | 30.00th=[12281], 40.00th=[12416], 50.00th=[12416], 60.00th=[12550], 00:17:06.783 | 70.00th=[12550], 80.00th=[12550], 90.00th=[12550], 95.00th=[12550], 00:17:06.783 | 99.00th=[12550], 99.50th=[12550], 99.90th=[12550], 99.95th=[12550], 00:17:06.783 | 99.99th=[12550] 00:17:06.783 lat (msec) : >=2000=100.00% 00:17:06.783 cpu : usr=0.00%, sys=0.24%, ctx=74, majf=0, minf=7681 00:17:06.783 IO depths : 1=3.3%, 2=6.7%, 4=13.3%, 8=26.7%, 16=50.0%, 32=0.0%, >=64=0.0% 00:17:06.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.783 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:17:06.783 issued rwts: total=30,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.783 job2: (groupid=0, jobs=1): err= 0: pid=3024517: Wed Apr 24 17:21:13 2024 00:17:06.783 read: IOPS=1, BW=1990KiB/s (2038kB/s)(24.0MiB/12350msec) 00:17:06.783 slat (usec): min=586, max=3781.2k, avg=427056.70, stdev=995981.41 00:17:06.783 clat (msec): min=2100, max=12318, avg=6601.67, stdev=2342.33 00:17:06.783 lat (msec): min=4166, max=12349, avg=7028.73, stdev=2419.00 00:17:06.783 clat percentiles (msec): 00:17:06.783 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4178], 20.00th=[ 4178], 00:17:06.783 | 30.00th=[ 4245], 40.00th=[ 6342], 50.00th=[ 6342], 60.00th=[ 8490], 00:17:06.783 | 70.00th=[ 8490], 80.00th=[ 8490], 90.00th=[ 8557], 95.00th=[ 8557], 00:17:06.783 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:17:06.783 | 99.99th=[12281] 00:17:06.783 lat (msec) : >=2000=100.00% 00:17:06.783 cpu : usr=0.00%, sys=0.15%, ctx=52, majf=0, minf=6145 00:17:06.783 IO depths : 1=4.2%, 2=8.3%, 4=16.7%, 8=33.3%, 16=37.5%, 32=0.0%, >=64=0.0% 00:17:06.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.783 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:17:06.783 issued rwts: total=24,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.783 job2: (groupid=0, jobs=1): err= 0: pid=3024518: Wed Apr 24 17:21:13 2024 00:17:06.783 read: IOPS=1, BW=1241KiB/s (1271kB/s)(15.0MiB/12379msec) 00:17:06.783 slat (msec): min=4, max=2131, avg=685.04, stdev=980.99 00:17:06.783 clat (msec): min=2102, max=12373, avg=8818.12, stdev=3593.07 00:17:06.783 lat (msec): min=4196, max=12378, avg=9503.16, stdev=3176.66 00:17:06.783 clat percentiles (msec): 00:17:06.783 | 1.00th=[ 2106], 5.00th=[ 2106], 10.00th=[ 4212], 20.00th=[ 4212], 00:17:06.783 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[10671], 00:17:06.783 | 70.00th=[10671], 80.00th=[12281], 90.00th=[12416], 
95.00th=[12416], 00:17:06.784 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:17:06.784 | 99.99th=[12416] 00:17:06.784 lat (msec) : >=2000=100.00% 00:17:06.784 cpu : usr=0.00%, sys=0.09%, ctx=57, majf=0, minf=3841 00:17:06.784 IO depths : 1=6.7%, 2=13.3%, 4=26.7%, 8=53.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:06.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.784 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.784 issued rwts: total=15,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.784 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.784 job2: (groupid=0, jobs=1): err= 0: pid=3024519: Wed Apr 24 17:21:13 2024 00:17:06.784 read: IOPS=135, BW=136MiB/s (142MB/s)(1682MiB/12407msec) 00:17:06.784 slat (usec): min=40, max=2068.1k, avg=6125.74, stdev=52153.64 00:17:06.784 clat (msec): min=362, max=4552, avg=910.68, stdev=1007.89 00:17:06.784 lat (msec): min=363, max=4580, avg=916.80, stdev=1010.38 00:17:06.784 clat percentiles (msec): 00:17:06.784 | 1.00th=[ 363], 5.00th=[ 368], 10.00th=[ 368], 20.00th=[ 447], 00:17:06.784 | 30.00th=[ 493], 40.00th=[ 542], 50.00th=[ 651], 60.00th=[ 726], 00:17:06.784 | 70.00th=[ 793], 80.00th=[ 860], 90.00th=[ 1020], 95.00th=[ 4329], 00:17:06.784 | 99.00th=[ 4530], 99.50th=[ 4530], 99.90th=[ 4530], 99.95th=[ 4530], 00:17:06.784 | 99.99th=[ 4530] 00:17:06.784 bw ( KiB/s): min= 1450, max=347465, per=5.71%, avg=187182.06, stdev=88035.14, samples=17 00:17:06.784 iops : min= 1, max= 339, avg=182.65, stdev=85.97, samples=17 00:17:06.784 lat (msec) : 500=34.60%, 750=29.13%, 1000=25.45%, 2000=3.21%, >=2000=7.61% 00:17:06.784 cpu : usr=0.02%, sys=1.63%, ctx=1540, majf=0, minf=32769 00:17:06.784 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.3% 00:17:06.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.784 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:06.784 issued rwts: total=1682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.784 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.784 job3: (groupid=0, jobs=1): err= 0: pid=3024520: Wed Apr 24 17:21:13 2024 00:17:06.784 read: IOPS=261, BW=262MiB/s (274MB/s)(2620MiB/10012msec) 00:17:06.784 slat (usec): min=39, max=90768, avg=3812.25, stdev=10724.45 00:17:06.784 clat (msec): min=11, max=1390, avg=446.33, stdev=278.37 00:17:06.784 lat (msec): min=12, max=1391, avg=450.14, stdev=280.92 00:17:06.784 clat percentiles (msec): 00:17:06.784 | 1.00th=[ 34], 5.00th=[ 108], 10.00th=[ 110], 20.00th=[ 161], 00:17:06.784 | 30.00th=[ 245], 40.00th=[ 368], 50.00th=[ 388], 60.00th=[ 493], 00:17:06.784 | 70.00th=[ 558], 80.00th=[ 726], 90.00th=[ 860], 95.00th=[ 944], 00:17:06.784 | 99.00th=[ 1083], 99.50th=[ 1234], 99.90th=[ 1385], 99.95th=[ 1385], 00:17:06.784 | 99.99th=[ 1385] 00:17:06.784 bw ( KiB/s): min=34816, max=536576, per=7.26%, avg=237868.53, stdev=122015.07, samples=17 00:17:06.784 iops : min= 34, max= 524, avg=232.24, stdev=119.15, samples=17 00:17:06.784 lat (msec) : 20=0.38%, 50=1.34%, 100=2.10%, 250=27.56%, 500=30.92% 00:17:06.784 lat (msec) : 750=19.58%, 1000=14.66%, 2000=3.47% 00:17:06.784 cpu : usr=0.10%, sys=2.71%, ctx=2618, majf=0, minf=32769 00:17:06.784 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:17:06.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.784 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:06.784 
issued rwts: total=2620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.784 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.784 job3: (groupid=0, jobs=1): err= 0: pid=3024521: Wed Apr 24 17:21:13 2024 00:17:06.784 read: IOPS=57, BW=57.7MiB/s (60.5MB/s)(714MiB/12380msec) 00:17:06.784 slat (usec): min=39, max=2100.6k, avg=14407.81, stdev=148616.26 00:17:06.784 clat (msec): min=119, max=10571, avg=1918.24, stdev=3532.05 00:17:06.784 lat (msec): min=120, max=10686, avg=1932.65, stdev=3543.40 00:17:06.784 clat percentiles (msec): 00:17:06.784 | 1.00th=[ 123], 5.00th=[ 124], 10.00th=[ 125], 20.00th=[ 140], 00:17:06.784 | 30.00th=[ 176], 40.00th=[ 213], 50.00th=[ 245], 60.00th=[ 247], 00:17:06.784 | 70.00th=[ 249], 80.00th=[ 1905], 90.00th=[ 9597], 95.00th=[ 9597], 00:17:06.784 | 99.00th=[ 9731], 99.50th=[ 9731], 99.90th=[10537], 99.95th=[10537], 00:17:06.784 | 99.99th=[10537] 00:17:06.784 bw ( KiB/s): min= 1517, max=759808, per=4.58%, avg=150205.62, stdev=284893.41, samples=8 00:17:06.784 iops : min= 1, max= 742, avg=146.62, stdev=278.25, samples=8 00:17:06.784 lat (msec) : 250=71.01%, 500=7.00%, 2000=3.36%, >=2000=18.63% 00:17:06.784 cpu : usr=0.00%, sys=0.88%, ctx=716, majf=0, minf=32769 00:17:06.784 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.5%, >=64=91.2% 00:17:06.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.784 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:06.784 issued rwts: total=714,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.784 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.784 job3: (groupid=0, jobs=1): err= 0: pid=3024522: Wed Apr 24 17:21:13 2024 00:17:06.784 read: IOPS=11, BW=11.2MiB/s (11.8MB/s)(139MiB/12394msec) 00:17:06.784 slat (usec): min=381, max=2120.6k, avg=74018.96, stdev=337614.50 00:17:06.784 clat (msec): min=2104, max=12063, avg=5881.84, stdev=3207.97 00:17:06.784 lat (msec): min=3607, max=12066, avg=5955.86, stdev=3236.45 00:17:06.784 clat percentiles (msec): 00:17:06.784 | 1.00th=[ 3608], 5.00th=[ 3641], 10.00th=[ 3675], 20.00th=[ 3742], 00:17:06.784 | 30.00th=[ 3809], 40.00th=[ 3876], 50.00th=[ 3943], 60.00th=[ 4077], 00:17:06.784 | 70.00th=[ 6409], 80.00th=[10537], 90.00th=[11745], 95.00th=[11879], 00:17:06.784 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:17:06.784 | 99.99th=[12013] 00:17:06.784 bw ( KiB/s): min= 1450, max=22528, per=0.37%, avg=11989.00, stdev=14904.40, samples=2 00:17:06.784 iops : min= 1, max= 22, avg=11.50, stdev=14.85, samples=2 00:17:06.784 lat (msec) : >=2000=100.00% 00:17:06.784 cpu : usr=0.00%, sys=0.49%, ctx=344, majf=0, minf=32769 00:17:06.784 IO depths : 1=0.7%, 2=1.4%, 4=2.9%, 8=5.8%, 16=11.5%, 32=23.0%, >=64=54.7% 00:17:06.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.784 complete : 0=0.0%, 4=92.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=7.7% 00:17:06.784 issued rwts: total=139,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.784 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.784 job3: (groupid=0, jobs=1): err= 0: pid=3024523: Wed Apr 24 17:21:13 2024 00:17:06.784 read: IOPS=4, BW=4895KiB/s (5013kB/s)(50.0MiB/10459msec) 00:17:06.784 slat (usec): min=730, max=2090.0k, avg=206961.42, stdev=580763.67 00:17:06.784 clat (msec): min=110, max=10456, avg=8484.42, stdev=3001.57 00:17:06.784 lat (msec): min=2144, max=10458, avg=8691.38, stdev=2759.35 00:17:06.784 clat percentiles (msec): 00:17:06.784 | 1.00th=[ 111], 5.00th=[ 2165], 
10.00th=[ 2165], 20.00th=[ 6477], 00:17:06.784 | 30.00th=[ 8557], 40.00th=[10000], 50.00th=[10268], 60.00th=[10402], 00:17:06.784 | 70.00th=[10402], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402], 00:17:06.784 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:17:06.784 | 99.99th=[10402] 00:17:06.784 lat (msec) : 250=2.00%, >=2000=98.00% 00:17:06.784 cpu : usr=0.00%, sys=0.39%, ctx=177, majf=0, minf=12801 00:17:06.784 IO depths : 1=2.0%, 2=4.0%, 4=8.0%, 8=16.0%, 16=32.0%, 32=38.0%, >=64=0.0% 00:17:06.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.784 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:06.784 issued rwts: total=50,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.784 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.784 job3: (groupid=0, jobs=1): err= 0: pid=3024524: Wed Apr 24 17:21:13 2024 00:17:06.784 read: IOPS=9, BW=9.85MiB/s (10.3MB/s)(123MiB/12488msec) 00:17:06.784 slat (usec): min=584, max=2124.5k, avg=84528.31, stdev=381622.34 00:17:06.784 clat (msec): min=2090, max=12486, avg=9730.72, stdev=3738.49 00:17:06.784 lat (msec): min=3975, max=12487, avg=9815.25, stdev=3681.43 00:17:06.784 clat percentiles (msec): 00:17:06.784 | 1.00th=[ 3977], 5.00th=[ 3977], 10.00th=[ 4010], 20.00th=[ 4144], 00:17:06.784 | 30.00th=[ 6342], 40.00th=[12147], 50.00th=[12281], 60.00th=[12416], 00:17:06.784 | 70.00th=[12416], 80.00th=[12416], 90.00th=[12416], 95.00th=[12416], 00:17:06.784 | 99.00th=[12550], 99.50th=[12550], 99.90th=[12550], 99.95th=[12550], 00:17:06.784 | 99.99th=[12550] 00:17:06.784 lat (msec) : >=2000=100.00% 00:17:06.784 cpu : usr=0.00%, sys=0.73%, ctx=223, majf=0, minf=31489 00:17:06.784 IO depths : 1=0.8%, 2=1.6%, 4=3.3%, 8=6.5%, 16=13.0%, 32=26.0%, >=64=48.8% 00:17:06.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.785 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:06.785 issued rwts: total=123,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.785 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.785 job3: (groupid=0, jobs=1): err= 0: pid=3024525: Wed Apr 24 17:21:13 2024 00:17:06.785 read: IOPS=2, BW=3060KiB/s (3133kB/s)(37.0MiB/12383msec) 00:17:06.785 slat (usec): min=362, max=2119.6k, avg=277739.86, stdev=676850.55 00:17:06.785 clat (msec): min=2105, max=12366, avg=10507.04, stdev=2877.32 00:17:06.785 lat (msec): min=4205, max=12382, avg=10784.78, stdev=2517.26 00:17:06.785 clat percentiles (msec): 00:17:06.785 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 8490], 00:17:06.785 | 30.00th=[10671], 40.00th=[12013], 50.00th=[12013], 60.00th=[12147], 00:17:06.785 | 70.00th=[12147], 80.00th=[12281], 90.00th=[12281], 95.00th=[12416], 00:17:06.785 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:17:06.785 | 99.99th=[12416] 00:17:06.785 lat (msec) : >=2000=100.00% 00:17:06.785 cpu : usr=0.00%, sys=0.19%, ctx=113, majf=0, minf=9473 00:17:06.785 IO depths : 1=2.7%, 2=5.4%, 4=10.8%, 8=21.6%, 16=43.2%, 32=16.2%, >=64=0.0% 00:17:06.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.785 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:06.785 issued rwts: total=37,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.785 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.785 job3: (groupid=0, jobs=1): err= 0: pid=3024526: Wed Apr 24 17:21:13 2024 00:17:06.785 read: IOPS=41, 
BW=41.8MiB/s (43.8MB/s)(518MiB/12401msec) 00:17:06.785 slat (usec): min=44, max=2116.5k, avg=19899.97, stdev=157018.83 00:17:06.785 clat (msec): min=502, max=10572, avg=2704.73, stdev=3169.08 00:17:06.785 lat (msec): min=503, max=10577, avg=2724.63, stdev=3177.94 00:17:06.785 clat percentiles (msec): 00:17:06.785 | 1.00th=[ 502], 5.00th=[ 510], 10.00th=[ 527], 20.00th=[ 558], 00:17:06.785 | 30.00th=[ 600], 40.00th=[ 609], 50.00th=[ 634], 60.00th=[ 701], 00:17:06.785 | 70.00th=[ 2869], 80.00th=[ 6141], 90.00th=[ 8792], 95.00th=[ 8926], 00:17:06.785 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[10537], 99.95th=[10537], 00:17:06.785 | 99.99th=[10537] 00:17:06.785 bw ( KiB/s): min= 1450, max=251904, per=2.71%, avg=88907.78, stdev=97029.30, samples=9 00:17:06.785 iops : min= 1, max= 246, avg=86.78, stdev=94.80, samples=9 00:17:06.785 lat (msec) : 750=60.62%, >=2000=39.38% 00:17:06.785 cpu : usr=0.01%, sys=0.76%, ctx=657, majf=0, minf=32769 00:17:06.785 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.1%, 32=6.2%, >=64=87.8% 00:17:06.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.785 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:17:06.785 issued rwts: total=518,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.785 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.785 job3: (groupid=0, jobs=1): err= 0: pid=3024527: Wed Apr 24 17:21:13 2024 00:17:06.785 read: IOPS=219, BW=219MiB/s (230MB/s)(2252MiB/10268msec) 00:17:06.785 slat (usec): min=53, max=2029.8k, avg=4437.92, stdev=44315.15 00:17:06.785 clat (msec): min=112, max=2668, avg=552.16, stdev=550.03 00:17:06.785 lat (msec): min=113, max=4212, avg=556.59, stdev=555.45 00:17:06.785 clat percentiles (msec): 00:17:06.785 | 1.00th=[ 114], 5.00th=[ 144], 10.00th=[ 180], 20.00th=[ 213], 00:17:06.785 | 30.00th=[ 226], 40.00th=[ 326], 50.00th=[ 384], 60.00th=[ 510], 00:17:06.785 | 70.00th=[ 617], 80.00th=[ 659], 90.00th=[ 1028], 95.00th=[ 2467], 00:17:06.785 | 99.00th=[ 2635], 99.50th=[ 2635], 99.90th=[ 2668], 99.95th=[ 2668], 00:17:06.785 | 99.99th=[ 2668] 00:17:06.785 bw ( KiB/s): min=18432, max=640252, per=8.83%, avg=289535.73, stdev=181565.80, samples=15 00:17:06.785 iops : min= 18, max= 625, avg=282.73, stdev=177.28, samples=15 00:17:06.785 lat (msec) : 250=33.48%, 500=25.36%, 750=28.20%, 1000=2.40%, 2000=4.57% 00:17:06.785 lat (msec) : >=2000=5.99% 00:17:06.785 cpu : usr=0.05%, sys=2.41%, ctx=1958, majf=0, minf=32769 00:17:06.785 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:17:06.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.785 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:06.785 issued rwts: total=2252,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.785 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.785 job3: (groupid=0, jobs=1): err= 0: pid=3024528: Wed Apr 24 17:21:13 2024 00:17:06.785 read: IOPS=98, BW=98.9MiB/s (104MB/s)(1222MiB/12360msec) 00:17:06.785 slat (usec): min=34, max=1775.0k, avg=8395.00, stdev=60783.22 00:17:06.785 clat (msec): min=238, max=4530, avg=1068.84, stdev=1067.58 00:17:06.785 lat (msec): min=239, max=4573, avg=1077.24, stdev=1070.87 00:17:06.785 clat percentiles (msec): 00:17:06.785 | 1.00th=[ 243], 5.00th=[ 338], 10.00th=[ 368], 20.00th=[ 372], 00:17:06.785 | 30.00th=[ 397], 40.00th=[ 625], 50.00th=[ 735], 60.00th=[ 885], 00:17:06.785 | 70.00th=[ 1062], 80.00th=[ 1183], 90.00th=[ 3876], 95.00th=[ 4010], 00:17:06.785 | 
99.00th=[ 4077], 99.50th=[ 4111], 99.90th=[ 4111], 99.95th=[ 4530], 00:17:06.785 | 99.99th=[ 4530] 00:17:06.785 bw ( KiB/s): min= 1517, max=409600, per=5.26%, avg=172514.54, stdev=125345.74, samples=13 00:17:06.785 iops : min= 1, max= 400, avg=168.38, stdev=122.40, samples=13 00:17:06.785 lat (msec) : 250=1.31%, 500=35.43%, 750=14.98%, 1000=14.81%, 2000=21.77% 00:17:06.785 lat (msec) : >=2000=11.70% 00:17:06.785 cpu : usr=0.00%, sys=1.22%, ctx=1553, majf=0, minf=32769 00:17:06.785 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.6%, >=64=94.8% 00:17:06.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.785 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:06.785 issued rwts: total=1222,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.785 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.785 job3: (groupid=0, jobs=1): err= 0: pid=3024529: Wed Apr 24 17:21:13 2024 00:17:06.785 read: IOPS=24, BW=24.5MiB/s (25.7MB/s)(306MiB/12477msec) 00:17:06.785 slat (usec): min=44, max=2091.2k, avg=33902.14, stdev=212894.72 00:17:06.785 clat (msec): min=690, max=11345, avg=5012.14, stdev=3794.72 00:17:06.785 lat (msec): min=705, max=11351, avg=5046.04, stdev=3806.54 00:17:06.785 clat percentiles (msec): 00:17:06.785 | 1.00th=[ 701], 5.00th=[ 718], 10.00th=[ 735], 20.00th=[ 860], 00:17:06.785 | 30.00th=[ 944], 40.00th=[ 3641], 50.00th=[ 3910], 60.00th=[ 7550], 00:17:06.785 | 70.00th=[ 7684], 80.00th=[ 7752], 90.00th=[11208], 95.00th=[11208], 00:17:06.785 | 99.00th=[11342], 99.50th=[11342], 99.90th=[11342], 99.95th=[11342], 00:17:06.785 | 99.99th=[11342] 00:17:06.785 bw ( KiB/s): min= 1896, max=141312, per=1.24%, avg=40719.11, stdev=50453.83, samples=9 00:17:06.785 iops : min= 1, max= 138, avg=39.67, stdev=49.36, samples=9 00:17:06.785 lat (msec) : 750=11.76%, 1000=20.26%, 2000=1.31%, >=2000=66.67% 00:17:06.785 cpu : usr=0.00%, sys=0.75%, ctx=530, majf=0, minf=32769 00:17:06.785 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.6%, 16=5.2%, 32=10.5%, >=64=79.4% 00:17:06.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.785 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:17:06.785 issued rwts: total=306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.785 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.785 job3: (groupid=0, jobs=1): err= 0: pid=3024530: Wed Apr 24 17:21:13 2024 00:17:06.785 read: IOPS=26, BW=26.2MiB/s (27.4MB/s)(327MiB/12492msec) 00:17:06.785 slat (usec): min=82, max=2046.9k, avg=31750.78, stdev=188728.82 00:17:06.785 clat (msec): min=913, max=10973, avg=4701.21, stdev=3309.37 00:17:06.785 lat (msec): min=914, max=10975, avg=4732.96, stdev=3321.69 00:17:06.785 clat percentiles (msec): 00:17:06.785 | 1.00th=[ 944], 5.00th=[ 1020], 10.00th=[ 1053], 20.00th=[ 1070], 00:17:06.785 | 30.00th=[ 1099], 40.00th=[ 3507], 50.00th=[ 5134], 60.00th=[ 5470], 00:17:06.785 | 70.00th=[ 6074], 80.00th=[ 7953], 90.00th=[10671], 95.00th=[10805], 00:17:06.785 | 99.00th=[10939], 99.50th=[10939], 99.90th=[10939], 99.95th=[10939], 00:17:06.785 | 99.99th=[10939] 00:17:06.785 bw ( KiB/s): min= 1868, max=120832, per=1.25%, avg=40918.10, stdev=36630.44, samples=10 00:17:06.785 iops : min= 1, max= 118, avg=39.60, stdev=35.89, samples=10 00:17:06.785 lat (msec) : 1000=4.28%, 2000=29.97%, >=2000=65.75% 00:17:06.785 cpu : usr=0.02%, sys=1.02%, ctx=585, majf=0, minf=32106 00:17:06.785 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.9%, 32=9.8%, >=64=80.7% 00:17:06.785 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.785 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:17:06.785 issued rwts: total=327,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.785 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.785 job3: (groupid=0, jobs=1): err= 0: pid=3024531: Wed Apr 24 17:21:13 2024 00:17:06.785 read: IOPS=30, BW=30.6MiB/s (32.1MB/s)(379MiB/12370msec) 00:17:06.785 slat (usec): min=353, max=2061.5k, avg=27074.05, stdev=142457.16 00:17:06.785 clat (msec): min=895, max=9679, avg=3923.23, stdev=2740.85 00:17:06.785 lat (msec): min=899, max=9684, avg=3950.31, stdev=2752.07 00:17:06.785 clat percentiles (msec): 00:17:06.785 | 1.00th=[ 894], 5.00th=[ 911], 10.00th=[ 936], 20.00th=[ 1083], 00:17:06.785 | 30.00th=[ 1485], 40.00th=[ 1821], 50.00th=[ 4329], 60.00th=[ 4530], 00:17:06.785 | 70.00th=[ 5604], 80.00th=[ 6074], 90.00th=[ 8356], 95.00th=[ 9463], 00:17:06.785 | 99.00th=[ 9597], 99.50th=[ 9731], 99.90th=[ 9731], 99.95th=[ 9731], 00:17:06.785 | 99.99th=[ 9731] 00:17:06.785 bw ( KiB/s): min= 1517, max=98304, per=1.21%, avg=39669.23, stdev=34159.64, samples=13 00:17:06.785 iops : min= 1, max= 96, avg=38.69, stdev=33.40, samples=13 00:17:06.785 lat (msec) : 1000=15.57%, 2000=25.59%, >=2000=58.84% 00:17:06.785 cpu : usr=0.02%, sys=0.78%, ctx=969, majf=0, minf=32769 00:17:06.785 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.2%, 32=8.4%, >=64=83.4% 00:17:06.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.785 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:17:06.785 issued rwts: total=379,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.785 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.785 job3: (groupid=0, jobs=1): err= 0: pid=3024532: Wed Apr 24 17:21:13 2024 00:17:06.785 read: IOPS=19, BW=19.7MiB/s (20.7MB/s)(206MiB/10453msec) 00:17:06.786 slat (usec): min=73, max=2082.5k, avg=50286.39, stdev=284653.24 00:17:06.786 clat (msec): min=92, max=9383, avg=5950.53, stdev=3245.04 00:17:06.786 lat (msec): min=1227, max=9387, avg=6000.82, stdev=3222.50 00:17:06.786 clat percentiles (msec): 00:17:06.786 | 1.00th=[ 1200], 5.00th=[ 1267], 10.00th=[ 1385], 20.00th=[ 1502], 00:17:06.786 | 30.00th=[ 3205], 40.00th=[ 5201], 50.00th=[ 7282], 60.00th=[ 8658], 00:17:06.786 | 70.00th=[ 8926], 80.00th=[ 9060], 90.00th=[ 9194], 95.00th=[ 9329], 00:17:06.786 | 99.00th=[ 9329], 99.50th=[ 9329], 99.90th=[ 9329], 99.95th=[ 9329], 00:17:06.786 | 99.99th=[ 9329] 00:17:06.786 bw ( KiB/s): min= 2043, max=77824, per=0.81%, avg=26614.17, stdev=26355.77, samples=6 00:17:06.786 iops : min= 1, max= 76, avg=25.67, stdev=25.93, samples=6 00:17:06.786 lat (msec) : 100=0.49%, 2000=22.33%, >=2000=77.18% 00:17:06.786 cpu : usr=0.00%, sys=0.98%, ctx=273, majf=0, minf=32769 00:17:06.786 IO depths : 1=0.5%, 2=1.0%, 4=1.9%, 8=3.9%, 16=7.8%, 32=15.5%, >=64=69.4% 00:17:06.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.786 complete : 0=0.0%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.2% 00:17:06.786 issued rwts: total=206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.786 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.786 job4: (groupid=0, jobs=1): err= 0: pid=3024541: Wed Apr 24 17:21:13 2024 00:17:06.786 read: IOPS=18, BW=18.5MiB/s (19.4MB/s)(193MiB/10417msec) 00:17:06.786 slat (usec): min=412, max=2102.0k, avg=53501.40, stdev=294604.33 00:17:06.786 clat (msec): min=89, max=9705, 
avg=6413.38, stdev=3534.10 00:17:06.786 lat (msec): min=1208, max=9707, avg=6466.88, stdev=3507.93 00:17:06.786 clat percentiles (msec): 00:17:06.786 | 1.00th=[ 1200], 5.00th=[ 1234], 10.00th=[ 1267], 20.00th=[ 1351], 00:17:06.786 | 30.00th=[ 2232], 40.00th=[ 7550], 50.00th=[ 8792], 60.00th=[ 8926], 00:17:06.786 | 70.00th=[ 9060], 80.00th=[ 9329], 90.00th=[ 9463], 95.00th=[ 9597], 00:17:06.786 | 99.00th=[ 9731], 99.50th=[ 9731], 99.90th=[ 9731], 99.95th=[ 9731], 00:17:06.786 | 99.99th=[ 9731] 00:17:06.786 bw ( KiB/s): min= 6144, max=65536, per=0.81%, avg=26624.00, stdev=25290.93, samples=5 00:17:06.786 iops : min= 6, max= 64, avg=26.00, stdev=24.70, samples=5 00:17:06.786 lat (msec) : 100=0.52%, 2000=26.42%, >=2000=73.06% 00:17:06.786 cpu : usr=0.01%, sys=0.79%, ctx=340, majf=0, minf=32769 00:17:06.786 IO depths : 1=0.5%, 2=1.0%, 4=2.1%, 8=4.1%, 16=8.3%, 32=16.6%, >=64=67.4% 00:17:06.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.786 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.5% 00:17:06.786 issued rwts: total=193,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.786 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.786 job4: (groupid=0, jobs=1): err= 0: pid=3024543: Wed Apr 24 17:21:13 2024 00:17:06.786 read: IOPS=16, BW=16.2MiB/s (17.0MB/s)(202MiB/12456msec) 00:17:06.786 slat (usec): min=110, max=4220.5k, avg=51174.11, stdev=354694.40 00:17:06.786 clat (msec): min=822, max=11662, avg=7406.84, stdev=4819.78 00:17:06.786 lat (msec): min=831, max=11668, avg=7458.02, stdev=4809.38 00:17:06.786 clat percentiles (msec): 00:17:06.786 | 1.00th=[ 852], 5.00th=[ 885], 10.00th=[ 978], 20.00th=[ 1062], 00:17:06.786 | 30.00th=[ 1200], 40.00th=[ 9597], 50.00th=[10805], 60.00th=[11073], 00:17:06.786 | 70.00th=[11342], 80.00th=[11476], 90.00th=[11610], 95.00th=[11610], 00:17:06.786 | 99.00th=[11610], 99.50th=[11610], 99.90th=[11610], 99.95th=[11610], 00:17:06.786 | 99.99th=[11610] 00:17:06.786 bw ( KiB/s): min= 2003, max=71680, per=0.78%, avg=25592.50, stdev=33401.94, samples=6 00:17:06.786 iops : min= 1, max= 70, avg=24.83, stdev=32.76, samples=6 00:17:06.786 lat (msec) : 1000=15.35%, 2000=18.81%, >=2000=65.84% 00:17:06.786 cpu : usr=0.00%, sys=0.57%, ctx=640, majf=0, minf=32769 00:17:06.786 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=7.9%, 32=15.8%, >=64=68.8% 00:17:06.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.786 complete : 0=0.0%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.3% 00:17:06.786 issued rwts: total=202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.786 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.786 job4: (groupid=0, jobs=1): err= 0: pid=3024544: Wed Apr 24 17:21:13 2024 00:17:06.786 read: IOPS=42, BW=42.2MiB/s (44.3MB/s)(436MiB/10322msec) 00:17:06.786 slat (usec): min=55, max=2064.8k, avg=23461.99, stdev=168428.61 00:17:06.786 clat (msec): min=89, max=7302, avg=2772.67, stdev=2588.16 00:17:06.786 lat (msec): min=799, max=7304, avg=2796.14, stdev=2589.85 00:17:06.786 clat percentiles (msec): 00:17:06.786 | 1.00th=[ 802], 5.00th=[ 827], 10.00th=[ 844], 20.00th=[ 869], 00:17:06.786 | 30.00th=[ 902], 40.00th=[ 1083], 50.00th=[ 1401], 60.00th=[ 1536], 00:17:06.786 | 70.00th=[ 3004], 80.00th=[ 6678], 90.00th=[ 7013], 95.00th=[ 7148], 00:17:06.786 | 99.00th=[ 7282], 99.50th=[ 7282], 99.90th=[ 7282], 99.95th=[ 7282], 00:17:06.786 | 99.99th=[ 7282] 00:17:06.786 bw ( KiB/s): min= 4096, max=159744, per=2.41%, avg=78848.00, stdev=67179.39, 
samples=8 00:17:06.786 iops : min= 4, max= 156, avg=77.00, stdev=65.60, samples=8 00:17:06.786 lat (msec) : 100=0.23%, 1000=38.53%, 2000=30.28%, >=2000=30.96% 00:17:06.786 cpu : usr=0.02%, sys=0.99%, ctx=780, majf=0, minf=32769 00:17:06.786 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.7%, 32=7.3%, >=64=85.6% 00:17:06.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.786 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:17:06.786 issued rwts: total=436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.786 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.786 job4: (groupid=0, jobs=1): err= 0: pid=3024545: Wed Apr 24 17:21:13 2024 00:17:06.786 read: IOPS=84, BW=84.1MiB/s (88.2MB/s)(1050MiB/12488msec) 00:17:06.786 slat (usec): min=43, max=2080.8k, avg=9871.89, stdev=85867.06 00:17:06.786 clat (msec): min=489, max=5854, avg=1363.64, stdev=1614.66 00:17:06.786 lat (msec): min=491, max=5856, avg=1373.52, stdev=1618.88 00:17:06.786 clat percentiles (msec): 00:17:06.786 | 1.00th=[ 493], 5.00th=[ 498], 10.00th=[ 523], 20.00th=[ 550], 00:17:06.786 | 30.00th=[ 625], 40.00th=[ 684], 50.00th=[ 735], 60.00th=[ 785], 00:17:06.786 | 70.00th=[ 844], 80.00th=[ 877], 90.00th=[ 5403], 95.00th=[ 5671], 00:17:06.786 | 99.00th=[ 5805], 99.50th=[ 5873], 99.90th=[ 5873], 99.95th=[ 5873], 00:17:06.786 | 99.99th=[ 5873] 00:17:06.786 bw ( KiB/s): min= 1896, max=270336, per=4.44%, avg=145374.54, stdev=87664.14, samples=13 00:17:06.786 iops : min= 1, max= 264, avg=141.85, stdev=85.73, samples=13 00:17:06.786 lat (msec) : 500=7.90%, 750=44.76%, 1000=29.81%, 2000=0.19%, >=2000=17.33% 00:17:06.786 cpu : usr=0.04%, sys=1.49%, ctx=983, majf=0, minf=32769 00:17:06.786 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.0% 00:17:06.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.786 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:06.786 issued rwts: total=1050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.786 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.786 job4: (groupid=0, jobs=1): err= 0: pid=3024546: Wed Apr 24 17:21:13 2024 00:17:06.786 read: IOPS=21, BW=21.3MiB/s (22.3MB/s)(266MiB/12491msec) 00:17:06.786 slat (usec): min=472, max=4220.5k, avg=38967.06, stdev=317158.96 00:17:06.786 clat (msec): min=846, max=11542, avg=5794.29, stdev=5001.33 00:17:06.786 lat (msec): min=848, max=11554, avg=5833.26, stdev=5005.32 00:17:06.786 clat percentiles (msec): 00:17:06.786 | 1.00th=[ 869], 5.00th=[ 877], 10.00th=[ 885], 20.00th=[ 911], 00:17:06.786 | 30.00th=[ 978], 40.00th=[ 1053], 50.00th=[ 1150], 60.00th=[10805], 00:17:06.786 | 70.00th=[10939], 80.00th=[11073], 90.00th=[11342], 95.00th=[11342], 00:17:06.786 | 99.00th=[11476], 99.50th=[11476], 99.90th=[11476], 99.95th=[11476], 00:17:06.786 | 99.99th=[11476] 00:17:06.786 bw ( KiB/s): min= 1882, max=145408, per=1.45%, avg=47417.67, stdev=58877.29, samples=6 00:17:06.786 iops : min= 1, max= 142, avg=46.17, stdev=57.63, samples=6 00:17:06.786 lat (msec) : 1000=31.58%, 2000=18.80%, >=2000=49.62% 00:17:06.786 cpu : usr=0.01%, sys=0.83%, ctx=816, majf=0, minf=32769 00:17:06.786 IO depths : 1=0.4%, 2=0.8%, 4=1.5%, 8=3.0%, 16=6.0%, 32=12.0%, >=64=76.3% 00:17:06.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.786 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:17:06.786 issued rwts: total=266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.786 
latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.786 job4: (groupid=0, jobs=1): err= 0: pid=3024547: Wed Apr 24 17:21:13 2024 00:17:06.786 read: IOPS=3, BW=3957KiB/s (4052kB/s)(48.0MiB/12421msec) 00:17:06.786 slat (usec): min=713, max=2077.9k, avg=214484.01, stdev=605092.29 00:17:06.786 clat (msec): min=2124, max=12387, avg=10076.71, stdev=2611.31 00:17:06.786 lat (msec): min=4179, max=12419, avg=10291.19, stdev=2354.43 00:17:06.786 clat percentiles (msec): 00:17:06.786 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 6342], 20.00th=[ 8490], 00:17:06.786 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[10671], 60.00th=[10671], 00:17:06.786 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12416], 95.00th=[12416], 00:17:06.786 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:17:06.786 | 99.99th=[12416] 00:17:06.786 lat (msec) : >=2000=100.00% 00:17:06.786 cpu : usr=0.00%, sys=0.31%, ctx=70, majf=0, minf=12289 00:17:06.786 IO depths : 1=2.1%, 2=4.2%, 4=8.3%, 8=16.7%, 16=33.3%, 32=35.4%, >=64=0.0% 00:17:06.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.787 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:06.787 issued rwts: total=48,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.787 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.787 job4: (groupid=0, jobs=1): err= 0: pid=3024548: Wed Apr 24 17:21:13 2024 00:17:06.787 read: IOPS=20, BW=20.8MiB/s (21.8MB/s)(259MiB/12448msec) 00:17:06.787 slat (usec): min=471, max=2133.0k, avg=39882.08, stdev=248435.64 00:17:06.787 clat (msec): min=705, max=11063, avg=5666.41, stdev=4760.76 00:17:06.787 lat (msec): min=711, max=11068, avg=5706.29, stdev=4762.19 00:17:06.787 clat percentiles (msec): 00:17:06.787 | 1.00th=[ 709], 5.00th=[ 802], 10.00th=[ 835], 20.00th=[ 860], 00:17:06.787 | 30.00th=[ 902], 40.00th=[ 1070], 50.00th=[ 4665], 60.00th=[10268], 00:17:06.787 | 70.00th=[10402], 80.00th=[10671], 90.00th=[10939], 95.00th=[10939], 00:17:06.787 | 99.00th=[11073], 99.50th=[11073], 99.90th=[11073], 99.95th=[11073], 00:17:06.787 | 99.99th=[11073] 00:17:06.787 bw ( KiB/s): min= 2048, max=122880, per=1.18%, avg=38619.43, stdev=49200.74, samples=7 00:17:06.787 iops : min= 2, max= 120, avg=37.71, stdev=48.05, samples=7 00:17:06.787 lat (msec) : 750=1.54%, 1000=32.82%, 2000=14.29%, >=2000=51.35% 00:17:06.787 cpu : usr=0.01%, sys=0.61%, ctx=670, majf=0, minf=32769 00:17:06.787 IO depths : 1=0.4%, 2=0.8%, 4=1.5%, 8=3.1%, 16=6.2%, 32=12.4%, >=64=75.7% 00:17:06.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.787 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:17:06.787 issued rwts: total=259,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.787 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.787 job4: (groupid=0, jobs=1): err= 0: pid=3024549: Wed Apr 24 17:21:13 2024 00:17:06.787 read: IOPS=59, BW=59.1MiB/s (61.9MB/s)(738MiB/12493msec) 00:17:06.787 slat (usec): min=48, max=2070.5k, avg=14042.22, stdev=126033.65 00:17:06.787 clat (msec): min=391, max=8750, avg=2103.51, stdev=2833.30 00:17:06.787 lat (msec): min=395, max=8758, avg=2117.55, stdev=2841.73 00:17:06.787 clat percentiles (msec): 00:17:06.787 | 1.00th=[ 397], 5.00th=[ 397], 10.00th=[ 405], 20.00th=[ 502], 00:17:06.787 | 30.00th=[ 592], 40.00th=[ 667], 50.00th=[ 944], 60.00th=[ 1070], 00:17:06.787 | 70.00th=[ 1167], 80.00th=[ 1385], 90.00th=[ 8423], 95.00th=[ 8658], 00:17:06.787 | 99.00th=[ 8792], 99.50th=[ 8792], 
99.90th=[ 8792], 99.95th=[ 8792], 00:17:06.787 | 99.99th=[ 8792] 00:17:06.787 bw ( KiB/s): min= 1868, max=282624, per=3.47%, avg=113710.55, stdev=95060.36, samples=11 00:17:06.787 iops : min= 1, max= 276, avg=110.82, stdev=92.99, samples=11 00:17:06.787 lat (msec) : 500=19.92%, 750=24.25%, 1000=7.59%, 2000=28.86%, >=2000=19.38% 00:17:06.787 cpu : usr=0.04%, sys=1.51%, ctx=729, majf=0, minf=32769 00:17:06.787 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.3%, >=64=91.5% 00:17:06.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.787 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:06.787 issued rwts: total=738,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.787 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.787 job4: (groupid=0, jobs=1): err= 0: pid=3024550: Wed Apr 24 17:21:13 2024 00:17:06.787 read: IOPS=52, BW=52.4MiB/s (55.0MB/s)(649MiB/12375msec) 00:17:06.787 slat (usec): min=75, max=2082.5k, avg=15792.69, stdev=128194.91 00:17:06.787 clat (msec): min=215, max=6787, avg=2057.26, stdev=2266.73 00:17:06.787 lat (msec): min=215, max=6789, avg=2073.06, stdev=2272.10 00:17:06.787 clat percentiles (msec): 00:17:06.787 | 1.00th=[ 232], 5.00th=[ 271], 10.00th=[ 296], 20.00th=[ 359], 00:17:06.787 | 30.00th=[ 393], 40.00th=[ 969], 50.00th=[ 1351], 60.00th=[ 1401], 00:17:06.787 | 70.00th=[ 1502], 80.00th=[ 3037], 90.00th=[ 6611], 95.00th=[ 6678], 00:17:06.787 | 99.00th=[ 6745], 99.50th=[ 6812], 99.90th=[ 6812], 99.95th=[ 6812], 00:17:06.787 | 99.99th=[ 6812] 00:17:06.787 bw ( KiB/s): min= 1517, max=382976, per=3.26%, avg=106852.50, stdev=130187.61, samples=10 00:17:06.787 iops : min= 1, max= 374, avg=104.30, stdev=127.18, samples=10 00:17:06.787 lat (msec) : 250=2.47%, 500=31.12%, 750=3.08%, 1000=3.39%, 2000=35.13% 00:17:06.787 lat (msec) : >=2000=24.81% 00:17:06.787 cpu : usr=0.00%, sys=1.10%, ctx=999, majf=0, minf=32769 00:17:06.787 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=4.9%, >=64=90.3% 00:17:06.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.787 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:06.787 issued rwts: total=649,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.787 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.787 job4: (groupid=0, jobs=1): err= 0: pid=3024551: Wed Apr 24 17:21:13 2024 00:17:06.787 read: IOPS=15, BW=15.9MiB/s (16.6MB/s)(164MiB/10329msec) 00:17:06.787 slat (usec): min=369, max=2093.3k, avg=62300.18, stdev=317271.72 00:17:06.787 clat (msec): min=110, max=9782, avg=7323.99, stdev=2907.63 00:17:06.787 lat (msec): min=1391, max=9783, avg=7386.29, stdev=2853.59 00:17:06.787 clat percentiles (msec): 00:17:06.787 | 1.00th=[ 1385], 5.00th=[ 1418], 10.00th=[ 1452], 20.00th=[ 4329], 00:17:06.787 | 30.00th=[ 7617], 40.00th=[ 8792], 50.00th=[ 8926], 60.00th=[ 9060], 00:17:06.787 | 70.00th=[ 9194], 80.00th=[ 9329], 90.00th=[ 9597], 95.00th=[ 9731], 00:17:06.787 | 99.00th=[ 9731], 99.50th=[ 9731], 99.90th=[ 9731], 99.95th=[ 9731], 00:17:06.787 | 99.99th=[ 9731] 00:17:06.787 bw ( KiB/s): min= 4096, max=34816, per=0.45%, avg=14745.60, stdev=12064.09, samples=5 00:17:06.787 iops : min= 4, max= 34, avg=14.40, stdev=11.78, samples=5 00:17:06.787 lat (msec) : 250=0.61%, 2000=10.37%, >=2000=89.02% 00:17:06.787 cpu : usr=0.01%, sys=0.71%, ctx=320, majf=0, minf=32769 00:17:06.787 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.9%, 16=9.8%, 32=19.5%, >=64=61.6% 00:17:06.787 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.787 complete : 0=0.0%, 4=97.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.6% 00:17:06.787 issued rwts: total=164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.787 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.787 job4: (groupid=0, jobs=1): err= 0: pid=3024552: Wed Apr 24 17:21:13 2024 00:17:06.787 read: IOPS=27, BW=27.9MiB/s (29.3MB/s)(289MiB/10347msec) 00:17:06.787 slat (usec): min=430, max=2061.8k, avg=35478.74, stdev=206347.62 00:17:06.787 clat (msec): min=91, max=8029, avg=4167.58, stdev=2630.34 00:17:06.787 lat (msec): min=1741, max=8052, avg=4203.06, stdev=2622.67 00:17:06.787 clat percentiles (msec): 00:17:06.787 | 1.00th=[ 1737], 5.00th=[ 1770], 10.00th=[ 1787], 20.00th=[ 1821], 00:17:06.787 | 30.00th=[ 1838], 40.00th=[ 1888], 50.00th=[ 1955], 60.00th=[ 6409], 00:17:06.787 | 70.00th=[ 6812], 80.00th=[ 7349], 90.00th=[ 7684], 95.00th=[ 7819], 00:17:06.787 | 99.00th=[ 7953], 99.50th=[ 8020], 99.90th=[ 8020], 99.95th=[ 8020], 00:17:06.787 | 99.99th=[ 8020] 00:17:06.787 bw ( KiB/s): min= 4096, max=86016, per=1.26%, avg=41216.00, stdev=32976.48, samples=8 00:17:06.787 iops : min= 4, max= 84, avg=40.25, stdev=32.20, samples=8 00:17:06.787 lat (msec) : 100=0.35%, 2000=52.25%, >=2000=47.40% 00:17:06.787 cpu : usr=0.01%, sys=0.93%, ctx=862, majf=0, minf=32769 00:17:06.787 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.5%, 32=11.1%, >=64=78.2% 00:17:06.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.787 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:17:06.787 issued rwts: total=289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.787 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.787 job4: (groupid=0, jobs=1): err= 0: pid=3024553: Wed Apr 24 17:21:13 2024 00:17:06.787 read: IOPS=20, BW=20.0MiB/s (21.0MB/s)(249MiB/12434msec) 00:17:06.787 slat (usec): min=558, max=2085.9k, avg=41426.94, stdev=261166.21 00:17:06.787 clat (msec): min=771, max=11470, avg=6029.99, stdev=4927.46 00:17:06.787 lat (msec): min=778, max=11482, avg=6071.42, stdev=4929.37 00:17:06.787 clat percentiles (msec): 00:17:06.787 | 1.00th=[ 776], 5.00th=[ 810], 10.00th=[ 827], 20.00th=[ 835], 00:17:06.787 | 30.00th=[ 844], 40.00th=[ 869], 50.00th=[ 6409], 60.00th=[10805], 00:17:06.787 | 70.00th=[10939], 80.00th=[11073], 90.00th=[11208], 95.00th=[11342], 00:17:06.787 | 99.00th=[11476], 99.50th=[11476], 99.90th=[11476], 99.95th=[11476], 00:17:06.787 | 99.99th=[11476] 00:17:06.787 bw ( KiB/s): min= 2052, max=133120, per=1.09%, avg=35694.29, stdev=53363.73, samples=7 00:17:06.787 iops : min= 2, max= 130, avg=34.86, stdev=52.11, samples=7 00:17:06.787 lat (msec) : 1000=44.58%, >=2000=55.42% 00:17:06.787 cpu : usr=0.00%, sys=0.57%, ctx=777, majf=0, minf=32769 00:17:06.787 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.4%, 32=12.9%, >=64=74.7% 00:17:06.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.787 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:17:06.787 issued rwts: total=249,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.787 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.787 job4: (groupid=0, jobs=1): err= 0: pid=3024554: Wed Apr 24 17:21:13 2024 00:17:06.787 read: IOPS=51, BW=51.9MiB/s (54.4MB/s)(641MiB/12360msec) 00:17:06.787 slat (usec): min=65, max=2148.3k, avg=15939.64, stdev=118010.07 00:17:06.787 clat (msec): min=351, max=4275, avg=2026.99, stdev=1154.28 00:17:06.787 lat 
(msec): min=352, max=6423, avg=2042.93, stdev=1164.26 00:17:06.787 clat percentiles (msec): 00:17:06.787 | 1.00th=[ 359], 5.00th=[ 372], 10.00th=[ 393], 20.00th=[ 1150], 00:17:06.787 | 30.00th=[ 1301], 40.00th=[ 1385], 50.00th=[ 1435], 60.00th=[ 2635], 00:17:06.787 | 70.00th=[ 2769], 80.00th=[ 3004], 90.00th=[ 3809], 95.00th=[ 3910], 00:17:06.787 | 99.00th=[ 3977], 99.50th=[ 4178], 99.90th=[ 4279], 99.95th=[ 4279], 00:17:06.787 | 99.99th=[ 4279] 00:17:06.787 bw ( KiB/s): min= 1517, max=247808, per=3.57%, avg=116904.56, stdev=68301.82, samples=9 00:17:06.787 iops : min= 1, max= 242, avg=114.11, stdev=66.80, samples=9 00:17:06.787 lat (msec) : 500=10.30%, 1000=4.06%, 2000=43.84%, >=2000=41.81% 00:17:06.787 cpu : usr=0.02%, sys=1.12%, ctx=1070, majf=0, minf=32769 00:17:06.787 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=5.0%, >=64=90.2% 00:17:06.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.787 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:06.787 issued rwts: total=641,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.787 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.787 job5: (groupid=0, jobs=1): err= 0: pid=3024555: Wed Apr 24 17:21:13 2024 00:17:06.787 read: IOPS=2, BW=2678KiB/s (2743kB/s)(27.0MiB/10323msec) 00:17:06.787 slat (msec): min=2, max=2116, avg=377.88, stdev=776.98 00:17:06.787 clat (msec): min=119, max=10313, avg=5750.53, stdev=3411.02 00:17:06.787 lat (msec): min=2137, max=10322, avg=6128.41, stdev=3327.33 00:17:06.787 clat percentiles (msec): 00:17:06.787 | 1.00th=[ 120], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 2198], 00:17:06.787 | 30.00th=[ 2232], 40.00th=[ 4329], 50.00th=[ 6477], 60.00th=[ 6544], 00:17:06.788 | 70.00th=[ 8658], 80.00th=[10268], 90.00th=[10268], 95.00th=[10268], 00:17:06.788 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:17:06.788 | 99.99th=[10268] 00:17:06.788 lat (msec) : 250=3.70%, >=2000=96.30% 00:17:06.788 cpu : usr=0.00%, sys=0.16%, ctx=70, majf=0, minf=6913 00:17:06.788 IO depths : 1=3.7%, 2=7.4%, 4=14.8%, 8=29.6%, 16=44.4%, 32=0.0%, >=64=0.0% 00:17:06.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.788 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:17:06.788 issued rwts: total=27,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.788 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.788 job5: (groupid=0, jobs=1): err= 0: pid=3024556: Wed Apr 24 17:21:13 2024 00:17:06.788 read: IOPS=201, BW=201MiB/s (211MB/s)(2073MiB/10306msec) 00:17:06.788 slat (usec): min=36, max=2056.3k, avg=4822.36, stdev=85323.28 00:17:06.788 clat (msec): min=102, max=8247, avg=319.28, stdev=1037.68 00:17:06.788 lat (msec): min=103, max=8248, avg=324.10, stdev=1054.17 00:17:06.788 clat percentiles (msec): 00:17:06.788 | 1.00th=[ 104], 5.00th=[ 105], 10.00th=[ 105], 20.00th=[ 106], 00:17:06.788 | 30.00th=[ 106], 40.00th=[ 107], 50.00th=[ 110], 60.00th=[ 117], 00:17:06.788 | 70.00th=[ 123], 80.00th=[ 123], 90.00th=[ 127], 95.00th=[ 388], 00:17:06.788 | 99.00th=[ 6611], 99.50th=[ 8221], 99.90th=[ 8221], 99.95th=[ 8221], 00:17:06.788 | 99.99th=[ 8221] 00:17:06.788 bw ( KiB/s): min=544768, max=1189888, per=29.90%, avg=980072.50, stdev=294801.52, samples=4 00:17:06.788 iops : min= 532, max= 1162, avg=957.00, stdev=287.83, samples=4 00:17:06.788 lat (msec) : 250=90.21%, 500=5.98%, >=2000=3.81% 00:17:06.788 cpu : usr=0.04%, sys=1.91%, ctx=1938, majf=0, minf=32769 00:17:06.788 IO 
depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:17:06.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.788 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:06.788 issued rwts: total=2073,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.788 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.788 job5: (groupid=0, jobs=1): err= 0: pid=3024557: Wed Apr 24 17:21:13 2024 00:17:06.788 read: IOPS=146, BW=146MiB/s (153MB/s)(1522MiB/10406msec) 00:17:06.788 slat (usec): min=50, max=2057.0k, avg=6756.87, stdev=98891.16 00:17:06.788 clat (msec): min=118, max=6653, avg=767.31, stdev=1684.43 00:17:06.788 lat (msec): min=123, max=6653, avg=774.06, stdev=1690.92 00:17:06.788 clat percentiles (msec): 00:17:06.788 | 1.00th=[ 124], 5.00th=[ 124], 10.00th=[ 124], 20.00th=[ 125], 00:17:06.788 | 30.00th=[ 125], 40.00th=[ 126], 50.00th=[ 138], 60.00th=[ 234], 00:17:06.788 | 70.00th=[ 249], 80.00th=[ 355], 90.00th=[ 2165], 95.00th=[ 6544], 00:17:06.788 | 99.00th=[ 6611], 99.50th=[ 6678], 99.90th=[ 6678], 99.95th=[ 6678], 00:17:06.788 | 99.99th=[ 6678] 00:17:06.788 bw ( KiB/s): min=26624, max=1042432, per=12.44%, avg=407844.57, stdev=385420.38, samples=7 00:17:06.788 iops : min= 26, max= 1018, avg=398.29, stdev=376.39, samples=7 00:17:06.788 lat (msec) : 250=72.60%, 500=14.32%, 2000=2.89%, >=2000=10.18% 00:17:06.788 cpu : usr=0.03%, sys=1.68%, ctx=1409, majf=0, minf=32769 00:17:06.788 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.1%, >=64=95.9% 00:17:06.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.788 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:06.788 issued rwts: total=1522,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.788 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.788 job5: (groupid=0, jobs=1): err= 0: pid=3024558: Wed Apr 24 17:21:13 2024 00:17:06.788 read: IOPS=33, BW=33.0MiB/s (34.6MB/s)(344MiB/10415msec) 00:17:06.788 slat (usec): min=45, max=2077.5k, avg=29947.49, stdev=203387.14 00:17:06.788 clat (msec): min=111, max=4919, avg=2366.74, stdev=1652.06 00:17:06.788 lat (msec): min=894, max=4937, avg=2396.69, stdev=1653.28 00:17:06.788 clat percentiles (msec): 00:17:06.788 | 1.00th=[ 894], 5.00th=[ 919], 10.00th=[ 927], 20.00th=[ 978], 00:17:06.788 | 30.00th=[ 1011], 40.00th=[ 1045], 50.00th=[ 1083], 60.00th=[ 2668], 00:17:06.788 | 70.00th=[ 4077], 80.00th=[ 4396], 90.00th=[ 4665], 95.00th=[ 4799], 00:17:06.788 | 99.00th=[ 4933], 99.50th=[ 4933], 99.90th=[ 4933], 99.95th=[ 4933], 00:17:06.788 | 99.99th=[ 4933] 00:17:06.788 bw ( KiB/s): min= 6144, max=145408, per=2.25%, avg=73728.00, stdev=62589.78, samples=6 00:17:06.788 iops : min= 6, max= 142, avg=72.00, stdev=61.12, samples=6 00:17:06.788 lat (msec) : 250=0.29%, 1000=27.03%, 2000=30.81%, >=2000=41.86% 00:17:06.788 cpu : usr=0.00%, sys=0.89%, ctx=687, majf=0, minf=32769 00:17:06.788 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.3%, 16=4.7%, 32=9.3%, >=64=81.7% 00:17:06.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.788 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:17:06.788 issued rwts: total=344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.788 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.788 job5: (groupid=0, jobs=1): err= 0: pid=3024559: Wed Apr 24 17:21:13 2024 00:17:06.788 read: IOPS=80, BW=80.6MiB/s (84.6MB/s)(839MiB/10404msec) 00:17:06.788 slat (usec): 
min=466, max=2069.2k, avg=12251.96, stdev=117524.12 00:17:06.788 clat (msec): min=121, max=5202, avg=898.26, stdev=784.25 00:17:06.788 lat (msec): min=294, max=5223, avg=910.51, stdev=799.40 00:17:06.788 clat percentiles (msec): 00:17:06.788 | 1.00th=[ 300], 5.00th=[ 317], 10.00th=[ 326], 20.00th=[ 338], 00:17:06.788 | 30.00th=[ 351], 40.00th=[ 376], 50.00th=[ 397], 60.00th=[ 818], 00:17:06.788 | 70.00th=[ 1133], 80.00th=[ 1334], 90.00th=[ 2366], 95.00th=[ 2467], 00:17:06.788 | 99.00th=[ 2534], 99.50th=[ 3406], 99.90th=[ 5201], 99.95th=[ 5201], 00:17:06.788 | 99.99th=[ 5201] 00:17:06.788 bw ( KiB/s): min=65536, max=378880, per=6.35%, avg=208018.29, stdev=138669.70, samples=7 00:17:06.788 iops : min= 64, max= 370, avg=203.14, stdev=135.42, samples=7 00:17:06.788 lat (msec) : 250=0.12%, 500=56.14%, 750=3.22%, 1000=3.34%, 2000=21.45% 00:17:06.788 lat (msec) : >=2000=15.73% 00:17:06.788 cpu : usr=0.00%, sys=1.14%, ctx=1422, majf=0, minf=32769 00:17:06.788 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.8%, >=64=92.5% 00:17:06.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.788 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:06.788 issued rwts: total=839,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.788 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.788 job5: (groupid=0, jobs=1): err= 0: pid=3024560: Wed Apr 24 17:21:13 2024 00:17:06.788 read: IOPS=24, BW=24.5MiB/s (25.6MB/s)(254MiB/10385msec) 00:17:06.788 slat (usec): min=701, max=2124.1k, avg=40402.67, stdev=216749.39 00:17:06.788 clat (msec): min=120, max=5122, avg=2351.73, stdev=765.95 00:17:06.788 lat (msec): min=1111, max=6918, avg=2392.13, stdev=801.99 00:17:06.788 clat percentiles (msec): 00:17:06.788 | 1.00th=[ 1116], 5.00th=[ 1133], 10.00th=[ 1150], 20.00th=[ 1351], 00:17:06.788 | 30.00th=[ 2072], 40.00th=[ 2366], 50.00th=[ 2567], 60.00th=[ 2769], 00:17:06.788 | 70.00th=[ 2869], 80.00th=[ 3037], 90.00th=[ 3138], 95.00th=[ 3272], 00:17:06.788 | 99.00th=[ 3440], 99.50th=[ 3473], 99.90th=[ 5134], 99.95th=[ 5134], 00:17:06.788 | 99.99th=[ 5134] 00:17:06.788 bw ( KiB/s): min=12312, max=118784, per=1.97%, avg=64518.00, stdev=58051.59, samples=4 00:17:06.788 iops : min= 12, max= 116, avg=63.00, stdev=56.70, samples=4 00:17:06.788 lat (msec) : 250=0.39%, 2000=27.56%, >=2000=72.05% 00:17:06.788 cpu : usr=0.00%, sys=0.94%, ctx=1406, majf=0, minf=32769 00:17:06.788 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.1%, 16=6.3%, 32=12.6%, >=64=75.2% 00:17:06.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.788 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:17:06.788 issued rwts: total=254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.788 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.788 job5: (groupid=0, jobs=1): err= 0: pid=3024561: Wed Apr 24 17:21:13 2024 00:17:06.788 read: IOPS=3, BW=3765KiB/s (3856kB/s)(38.0MiB/10334msec) 00:17:06.788 slat (usec): min=691, max=2084.7k, avg=268776.07, stdev=670488.46 00:17:06.788 clat (msec): min=119, max=10332, avg=5916.39, stdev=2756.79 00:17:06.788 lat (msec): min=2204, max=10333, avg=6185.16, stdev=2672.95 00:17:06.788 clat percentiles (msec): 00:17:06.788 | 1.00th=[ 120], 5.00th=[ 2198], 10.00th=[ 2232], 20.00th=[ 4329], 00:17:06.788 | 30.00th=[ 4329], 40.00th=[ 4396], 50.00th=[ 4396], 60.00th=[ 6477], 00:17:06.788 | 70.00th=[ 8557], 80.00th=[ 8658], 90.00th=[10268], 95.00th=[10268], 00:17:06.788 | 99.00th=[10268], 99.50th=[10268], 
99.90th=[10268], 99.95th=[10268], 00:17:06.788 | 99.99th=[10268] 00:17:06.788 lat (msec) : 250=2.63%, >=2000=97.37% 00:17:06.788 cpu : usr=0.00%, sys=0.28%, ctx=67, majf=0, minf=9729 00:17:06.788 IO depths : 1=2.6%, 2=5.3%, 4=10.5%, 8=21.1%, 16=42.1%, 32=18.4%, >=64=0.0% 00:17:06.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.788 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:06.788 issued rwts: total=38,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.788 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.788 job5: (groupid=0, jobs=1): err= 0: pid=3024562: Wed Apr 24 17:21:13 2024 00:17:06.788 read: IOPS=111, BW=111MiB/s (117MB/s)(1149MiB/10326msec) 00:17:06.788 slat (usec): min=47, max=2129.2k, avg=8879.66, stdev=88134.20 00:17:06.788 clat (msec): min=117, max=3465, avg=1064.03, stdev=1049.46 00:17:06.788 lat (msec): min=262, max=3471, avg=1072.91, stdev=1053.51 00:17:06.788 clat percentiles (msec): 00:17:06.788 | 1.00th=[ 262], 5.00th=[ 264], 10.00th=[ 266], 20.00th=[ 268], 00:17:06.789 | 30.00th=[ 268], 40.00th=[ 271], 50.00th=[ 523], 60.00th=[ 1028], 00:17:06.789 | 70.00th=[ 1183], 80.00th=[ 2299], 90.00th=[ 3306], 95.00th=[ 3406], 00:17:06.789 | 99.00th=[ 3440], 99.50th=[ 3473], 99.90th=[ 3473], 99.95th=[ 3473], 00:17:06.789 | 99.99th=[ 3473] 00:17:06.789 bw ( KiB/s): min=28672, max=492582, per=5.31%, avg=174168.50, stdev=165825.93, samples=12 00:17:06.789 iops : min= 28, max= 481, avg=170.08, stdev=161.93, samples=12 00:17:06.789 lat (msec) : 250=0.09%, 500=49.52%, 750=4.18%, 1000=4.70%, 2000=19.41% 00:17:06.789 lat (msec) : >=2000=22.11% 00:17:06.789 cpu : usr=0.01%, sys=1.39%, ctx=1562, majf=0, minf=32769 00:17:06.789 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.5% 00:17:06.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.789 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:06.789 issued rwts: total=1149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.789 job5: (groupid=0, jobs=1): err= 0: pid=3024563: Wed Apr 24 17:21:13 2024 00:17:06.789 read: IOPS=78, BW=78.9MiB/s (82.7MB/s)(822MiB/10423msec) 00:17:06.789 slat (usec): min=37, max=2013.9k, avg=12529.28, stdev=102956.02 00:17:06.789 clat (msec): min=119, max=6441, avg=1541.47, stdev=1199.86 00:17:06.789 lat (msec): min=433, max=6457, avg=1554.00, stdev=1201.25 00:17:06.789 clat percentiles (msec): 00:17:06.789 | 1.00th=[ 485], 5.00th=[ 498], 10.00th=[ 550], 20.00th=[ 676], 00:17:06.789 | 30.00th=[ 718], 40.00th=[ 802], 50.00th=[ 911], 60.00th=[ 986], 00:17:06.789 | 70.00th=[ 2198], 80.00th=[ 2769], 90.00th=[ 3675], 95.00th=[ 4010], 00:17:06.789 | 99.00th=[ 4212], 99.50th=[ 4212], 99.90th=[ 6409], 99.95th=[ 6409], 00:17:06.789 | 99.99th=[ 6409] 00:17:06.789 bw ( KiB/s): min=10240, max=278528, per=3.61%, avg=118442.67, stdev=83486.79, samples=12 00:17:06.789 iops : min= 10, max= 272, avg=115.67, stdev=81.53, samples=12 00:17:06.789 lat (msec) : 250=0.12%, 500=7.06%, 750=27.74%, 1000=28.35%, 2000=4.26% 00:17:06.789 lat (msec) : >=2000=32.48% 00:17:06.789 cpu : usr=0.02%, sys=1.61%, ctx=1183, majf=0, minf=32769 00:17:06.789 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.9%, >=64=92.3% 00:17:06.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.789 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:06.789 issued rwts: 
total=822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.789 job5: (groupid=0, jobs=1): err= 0: pid=3024564: Wed Apr 24 17:21:13 2024 00:17:06.789 read: IOPS=190, BW=190MiB/s (199MB/s)(1907MiB/10035msec) 00:17:06.789 slat (usec): min=53, max=2054.5k, avg=5241.78, stdev=75802.13 00:17:06.789 clat (msec): min=32, max=6709, avg=465.55, stdev=778.25 00:17:06.789 lat (msec): min=35, max=6739, avg=470.80, stdev=790.64 00:17:06.789 clat percentiles (msec): 00:17:06.789 | 1.00th=[ 74], 5.00th=[ 125], 10.00th=[ 142], 20.00th=[ 182], 00:17:06.789 | 30.00th=[ 230], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 249], 00:17:06.789 | 70.00th=[ 275], 80.00th=[ 342], 90.00th=[ 388], 95.00th=[ 2500], 00:17:06.789 | 99.00th=[ 4212], 99.50th=[ 4212], 99.90th=[ 4665], 99.95th=[ 6678], 00:17:06.789 | 99.99th=[ 6678] 00:17:06.789 bw ( KiB/s): min=40960, max=720896, per=12.36%, avg=405048.89, stdev=206885.45, samples=9 00:17:06.789 iops : min= 40, max= 704, avg=395.56, stdev=202.04, samples=9 00:17:06.789 lat (msec) : 50=0.47%, 100=1.26%, 250=60.41%, 500=29.42%, >=2000=8.44% 00:17:06.789 cpu : usr=0.04%, sys=2.30%, ctx=1727, majf=0, minf=32769 00:17:06.789 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:17:06.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.789 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:06.789 issued rwts: total=1907,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.789 job5: (groupid=0, jobs=1): err= 0: pid=3024565: Wed Apr 24 17:21:13 2024 00:17:06.789 read: IOPS=10, BW=10.3MiB/s (10.8MB/s)(106MiB/10321msec) 00:17:06.789 slat (usec): min=379, max=2107.3k, avg=96304.71, stdev=403584.69 00:17:06.789 clat (msec): min=111, max=10315, avg=9279.49, stdev=2094.65 00:17:06.789 lat (msec): min=2145, max=10320, avg=9375.79, stdev=1894.20 00:17:06.789 clat percentiles (msec): 00:17:06.789 | 1.00th=[ 2140], 5.00th=[ 4329], 10.00th=[ 6544], 20.00th=[ 9731], 00:17:06.789 | 30.00th=[ 9866], 40.00th=[ 9866], 50.00th=[10000], 60.00th=[10000], 00:17:06.789 | 70.00th=[10134], 80.00th=[10134], 90.00th=[10268], 95.00th=[10268], 00:17:06.789 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:17:06.789 | 99.99th=[10268] 00:17:06.789 lat (msec) : 250=0.94%, >=2000=99.06% 00:17:06.789 cpu : usr=0.00%, sys=0.72%, ctx=110, majf=0, minf=27137 00:17:06.789 IO depths : 1=0.9%, 2=1.9%, 4=3.8%, 8=7.5%, 16=15.1%, 32=30.2%, >=64=40.6% 00:17:06.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.789 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:06.789 issued rwts: total=106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.789 job5: (groupid=0, jobs=1): err= 0: pid=3024566: Wed Apr 24 17:21:13 2024 00:17:06.789 read: IOPS=3, BW=3660KiB/s (3748kB/s)(37.0MiB/10351msec) 00:17:06.789 slat (usec): min=811, max=2086.8k, avg=276562.59, stdev=689192.95 00:17:06.789 clat (msec): min=117, max=10349, avg=8067.43, stdev=2610.94 00:17:06.789 lat (msec): min=2197, max=10350, avg=8343.99, stdev=2264.46 00:17:06.789 clat percentiles (msec): 00:17:06.789 | 1.00th=[ 118], 5.00th=[ 2198], 10.00th=[ 4329], 20.00th=[ 6477], 00:17:06.789 | 30.00th=[ 8557], 40.00th=[ 8557], 50.00th=[ 8658], 60.00th=[ 8658], 00:17:06.789 | 70.00th=[10268], 80.00th=[10402], 
90.00th=[10402], 95.00th=[10402], 00:17:06.789 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:17:06.789 | 99.99th=[10402] 00:17:06.789 lat (msec) : 250=2.70%, >=2000=97.30% 00:17:06.789 cpu : usr=0.00%, sys=0.26%, ctx=66, majf=0, minf=9473 00:17:06.789 IO depths : 1=2.7%, 2=5.4%, 4=10.8%, 8=21.6%, 16=43.2%, 32=16.2%, >=64=0.0% 00:17:06.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.789 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:06.789 issued rwts: total=37,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.789 job5: (groupid=0, jobs=1): err= 0: pid=3024567: Wed Apr 24 17:21:13 2024 00:17:06.789 read: IOPS=4, BW=4722KiB/s (4835kB/s)(48.0MiB/10409msec) 00:17:06.789 slat (usec): min=717, max=2112.8k, avg=214512.71, stdev=609672.00 00:17:06.789 clat (msec): min=111, max=10406, avg=8785.78, stdev=2719.58 00:17:06.789 lat (msec): min=2166, max=10408, avg=9000.29, stdev=2409.21 00:17:06.789 clat percentiles (msec): 00:17:06.789 | 1.00th=[ 112], 5.00th=[ 2198], 10.00th=[ 4329], 20.00th=[ 6477], 00:17:06.789 | 30.00th=[ 8658], 40.00th=[10268], 50.00th=[10268], 60.00th=[10268], 00:17:06.789 | 70.00th=[10402], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402], 00:17:06.789 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:17:06.789 | 99.99th=[10402] 00:17:06.789 lat (msec) : 250=2.08%, >=2000=97.92% 00:17:06.789 cpu : usr=0.00%, sys=0.36%, ctx=97, majf=0, minf=12289 00:17:06.789 IO depths : 1=2.1%, 2=4.2%, 4=8.3%, 8=16.7%, 16=33.3%, 32=35.4%, >=64=0.0% 00:17:06.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.789 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:06.789 issued rwts: total=48,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:06.789 00:17:06.789 Run status group 0 (all jobs): 00:17:06.789 READ: bw=3201MiB/s (3357MB/s), 659KiB/s-301MiB/s (674kB/s-316MB/s), io=39.1GiB (42.0GB), run=10012-12514msec 00:17:06.789 00:17:06.789 Disk stats (read/write): 00:17:06.789 nvme0n1: ios=3466/0, merge=0/0, ticks=6145710/0, in_queue=6145710, util=98.61% 00:17:06.789 nvme1n1: ios=75958/0, merge=0/0, ticks=10012236/0, in_queue=10012236, util=98.87% 00:17:06.789 nvme2n1: ios=54211/0, merge=0/0, ticks=8070674/0, in_queue=8070674, util=98.89% 00:17:06.789 nvme3n1: ios=71090/0, merge=0/0, ticks=7179917/0, in_queue=7179917, util=99.05% 00:17:06.789 nvme4n1: ios=41367/0, merge=0/0, ticks=7755486/0, in_queue=7755486, util=99.20% 00:17:06.789 nvme5n1: ios=72538/0, merge=0/0, ticks=7286758/0, in_queue=7286758, util=98.88% 00:17:06.789 17:21:14 -- target/srq_overwhelm.sh@38 -- # sync 00:17:06.789 17:21:14 -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:17:06.789 17:21:14 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:17:06.789 17:21:14 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:17:06.789 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.789 17:21:15 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:17:06.789 17:21:15 -- common/autotest_common.sh@1205 -- # local i=0 00:17:06.789 17:21:15 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:06.790 17:21:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000000 00:17:06.790 17:21:15 -- common/autotest_common.sh@1213 -- # lsblk 
-l -o NAME,SERIAL 00:17:06.790 17:21:15 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000000 00:17:06.790 17:21:15 -- common/autotest_common.sh@1217 -- # return 0 00:17:06.790 17:21:15 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:06.790 17:21:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.790 17:21:15 -- common/autotest_common.sh@10 -- # set +x 00:17:06.790 17:21:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.790 17:21:15 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:17:06.790 17:21:15 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:07.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:07.047 17:21:16 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:17:07.048 17:21:16 -- common/autotest_common.sh@1205 -- # local i=0 00:17:07.048 17:21:16 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:07.048 17:21:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000001 00:17:07.048 17:21:16 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:07.048 17:21:16 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000001 00:17:07.048 17:21:16 -- common/autotest_common.sh@1217 -- # return 0 00:17:07.048 17:21:16 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:07.048 17:21:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.048 17:21:16 -- common/autotest_common.sh@10 -- # set +x 00:17:07.048 17:21:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.048 17:21:16 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:17:07.048 17:21:16 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:17:07.981 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:17:07.981 17:21:17 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:17:07.981 17:21:17 -- common/autotest_common.sh@1205 -- # local i=0 00:17:07.981 17:21:17 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:07.981 17:21:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000002 00:17:07.981 17:21:17 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:07.981 17:21:17 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000002 00:17:07.981 17:21:17 -- common/autotest_common.sh@1217 -- # return 0 00:17:07.981 17:21:17 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:17:07.981 17:21:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.981 17:21:17 -- common/autotest_common.sh@10 -- # set +x 00:17:07.981 17:21:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.981 17:21:17 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:17:07.981 17:21:17 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:17:09.353 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:17:09.353 17:21:18 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:17:09.353 17:21:18 -- common/autotest_common.sh@1205 -- # local i=0 00:17:09.353 17:21:18 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:09.353 17:21:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000003 00:17:09.353 17:21:18 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:09.353 
17:21:18 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000003 00:17:09.353 17:21:18 -- common/autotest_common.sh@1217 -- # return 0 00:17:09.353 17:21:18 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:17:09.353 17:21:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.353 17:21:18 -- common/autotest_common.sh@10 -- # set +x 00:17:09.353 17:21:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.353 17:21:18 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:17:09.353 17:21:18 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:17:10.285 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:17:10.285 17:21:19 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:17:10.285 17:21:19 -- common/autotest_common.sh@1205 -- # local i=0 00:17:10.285 17:21:19 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:10.285 17:21:19 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000004 00:17:10.285 17:21:19 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:10.286 17:21:19 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000004 00:17:10.286 17:21:19 -- common/autotest_common.sh@1217 -- # return 0 00:17:10.286 17:21:19 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:17:10.286 17:21:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.286 17:21:19 -- common/autotest_common.sh@10 -- # set +x 00:17:10.286 17:21:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.286 17:21:19 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:17:10.286 17:21:19 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:17:11.218 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:17:11.218 17:21:20 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:17:11.218 17:21:20 -- common/autotest_common.sh@1205 -- # local i=0 00:17:11.218 17:21:20 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:11.218 17:21:20 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000005 00:17:11.218 17:21:20 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:11.218 17:21:20 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000005 00:17:11.218 17:21:20 -- common/autotest_common.sh@1217 -- # return 0 00:17:11.218 17:21:20 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:17:11.218 17:21:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.218 17:21:20 -- common/autotest_common.sh@10 -- # set +x 00:17:11.218 17:21:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.218 17:21:20 -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:11.218 17:21:20 -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:17:11.218 17:21:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:11.218 17:21:20 -- nvmf/common.sh@117 -- # sync 00:17:11.218 17:21:20 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:11.218 17:21:20 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:11.218 17:21:20 -- nvmf/common.sh@120 -- # set +e 00:17:11.218 17:21:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:11.218 17:21:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:11.218 rmmod nvme_rdma 00:17:11.218 rmmod nvme_fabrics 00:17:11.218 17:21:20 -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:17:11.218 17:21:20 -- nvmf/common.sh@124 -- # set -e 00:17:11.218 17:21:20 -- nvmf/common.sh@125 -- # return 0 00:17:11.218 17:21:20 -- nvmf/common.sh@478 -- # '[' -n 3024107 ']' 00:17:11.218 17:21:20 -- nvmf/common.sh@479 -- # killprocess 3024107 00:17:11.218 17:21:20 -- common/autotest_common.sh@936 -- # '[' -z 3024107 ']' 00:17:11.218 17:21:20 -- common/autotest_common.sh@940 -- # kill -0 3024107 00:17:11.218 17:21:20 -- common/autotest_common.sh@941 -- # uname 00:17:11.218 17:21:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:11.218 17:21:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3024107 00:17:11.218 17:21:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:11.218 17:21:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:11.218 17:21:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3024107' 00:17:11.218 killing process with pid 3024107 00:17:11.218 17:21:20 -- common/autotest_common.sh@955 -- # kill 3024107 00:17:11.218 17:21:20 -- common/autotest_common.sh@960 -- # wait 3024107 00:17:11.476 17:21:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:11.476 17:21:20 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:17:11.476 00:17:11.476 real 0m32.941s 00:17:11.476 user 1m55.450s 00:17:11.476 sys 0m13.361s 00:17:11.476 17:21:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:11.476 17:21:20 -- common/autotest_common.sh@10 -- # set +x 00:17:11.476 ************************************ 00:17:11.476 END TEST nvmf_srq_overwhelm 00:17:11.476 ************************************ 00:17:11.476 17:21:20 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:17:11.477 17:21:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:11.477 17:21:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:11.477 17:21:20 -- common/autotest_common.sh@10 -- # set +x 00:17:11.742 ************************************ 00:17:11.743 START TEST nvmf_shutdown 00:17:11.743 ************************************ 00:17:11.743 17:21:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:17:11.743 * Looking for test storage... 
00:17:11.743 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:11.743 17:21:20 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:11.743 17:21:20 -- nvmf/common.sh@7 -- # uname -s 00:17:11.743 17:21:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:11.743 17:21:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:11.743 17:21:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:11.743 17:21:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:11.743 17:21:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:11.743 17:21:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:11.743 17:21:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:11.743 17:21:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:11.743 17:21:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:11.743 17:21:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:11.743 17:21:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:11.743 17:21:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:17:11.743 17:21:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:11.743 17:21:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:11.743 17:21:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:11.743 17:21:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:11.743 17:21:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:11.743 17:21:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:11.743 17:21:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:11.743 17:21:20 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:11.743 17:21:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.743 17:21:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.743 17:21:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.743 17:21:20 -- paths/export.sh@5 -- # export PATH 00:17:11.743 17:21:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.743 17:21:20 -- nvmf/common.sh@47 -- # : 0 00:17:11.743 17:21:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:11.743 17:21:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:11.743 17:21:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:11.743 17:21:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:11.743 17:21:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:11.743 17:21:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:11.743 17:21:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:11.743 17:21:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:11.743 17:21:20 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:11.743 17:21:20 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:11.743 17:21:20 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:17:11.743 17:21:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:11.743 17:21:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:11.743 17:21:20 -- common/autotest_common.sh@10 -- # set +x 00:17:12.003 ************************************ 00:17:12.003 START TEST nvmf_shutdown_tc1 00:17:12.003 ************************************ 00:17:12.003 17:21:21 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:17:12.003 17:21:21 -- target/shutdown.sh@74 -- # starttarget 00:17:12.003 17:21:21 -- target/shutdown.sh@15 -- # nvmftestinit 00:17:12.003 17:21:21 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:17:12.003 17:21:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:12.003 17:21:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:12.003 17:21:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:12.003 17:21:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:12.003 17:21:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.003 17:21:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:12.003 17:21:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.003 17:21:21 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:12.003 17:21:21 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:12.003 17:21:21 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:12.003 17:21:21 -- common/autotest_common.sh@10 -- # set +x 00:17:17.270 17:21:26 -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci 00:17:17.270 17:21:26 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:17.270 17:21:26 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:17.270 17:21:26 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:17.270 17:21:26 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:17.270 17:21:26 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:17.270 17:21:26 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:17.270 17:21:26 -- nvmf/common.sh@295 -- # net_devs=() 00:17:17.270 17:21:26 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:17.270 17:21:26 -- nvmf/common.sh@296 -- # e810=() 00:17:17.270 17:21:26 -- nvmf/common.sh@296 -- # local -ga e810 00:17:17.270 17:21:26 -- nvmf/common.sh@297 -- # x722=() 00:17:17.270 17:21:26 -- nvmf/common.sh@297 -- # local -ga x722 00:17:17.270 17:21:26 -- nvmf/common.sh@298 -- # mlx=() 00:17:17.270 17:21:26 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:17.271 17:21:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:17.271 17:21:26 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:17.271 17:21:26 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:17.271 17:21:26 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:17.271 17:21:26 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:17.271 17:21:26 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:17.271 17:21:26 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:17.271 17:21:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:17.271 17:21:26 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:17.271 17:21:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:17.271 17:21:26 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:17.271 17:21:26 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:17.271 17:21:26 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:17.271 17:21:26 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:17.271 17:21:26 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:17.271 17:21:26 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:17.271 17:21:26 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:17.271 17:21:26 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:17.271 17:21:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:17.271 17:21:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:17:17.271 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:17:17.271 17:21:26 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:17.271 17:21:26 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:17.271 17:21:26 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:17.271 17:21:26 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:17.271 17:21:26 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:17.271 17:21:26 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:17.271 17:21:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:17.271 17:21:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:17:17.271 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:17:17.271 17:21:26 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:17.271 17:21:26 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:17.271 17:21:26 -- nvmf/common.sh@350 -- # [[ 0x1015 == 
\0\x\1\0\1\7 ]] 00:17:17.271 17:21:26 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:17.271 17:21:26 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:17.271 17:21:26 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:17.271 17:21:26 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:17.271 17:21:26 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:17.271 17:21:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:17.271 17:21:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.271 17:21:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:17.271 17:21:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.271 17:21:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:17:17.271 Found net devices under 0000:da:00.0: mlx_0_0 00:17:17.271 17:21:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.271 17:21:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:17.271 17:21:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.271 17:21:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:17.271 17:21:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.271 17:21:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:17:17.271 Found net devices under 0000:da:00.1: mlx_0_1 00:17:17.271 17:21:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.271 17:21:26 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:17.271 17:21:26 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:17.271 17:21:26 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:17.271 17:21:26 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:17:17.271 17:21:26 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:17:17.271 17:21:26 -- nvmf/common.sh@409 -- # rdma_device_init 00:17:17.271 17:21:26 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:17:17.271 17:21:26 -- nvmf/common.sh@58 -- # uname 00:17:17.271 17:21:26 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:17.271 17:21:26 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:17.271 17:21:26 -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:17.271 17:21:26 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:17.271 17:21:26 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:17.271 17:21:26 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:17.271 17:21:26 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:17.271 17:21:26 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:17.271 17:21:26 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:17:17.271 17:21:26 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:17.271 17:21:26 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:17.271 17:21:26 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:17.271 17:21:26 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:17.271 17:21:26 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:17.271 17:21:26 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:17.271 17:21:26 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:17.271 17:21:26 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:17.271 17:21:26 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:17.271 17:21:26 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:17.271 17:21:26 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:17.271 17:21:26 -- nvmf/common.sh@105 -- # continue 2 
00:17:17.271 17:21:26 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:17.271 17:21:26 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:17.271 17:21:26 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:17.271 17:21:26 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:17.271 17:21:26 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:17.271 17:21:26 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:17.271 17:21:26 -- nvmf/common.sh@105 -- # continue 2 00:17:17.271 17:21:26 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:17.271 17:21:26 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:17.271 17:21:26 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:17.271 17:21:26 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:17.271 17:21:26 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:17.271 17:21:26 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:17.271 17:21:26 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:17.271 17:21:26 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:17.271 17:21:26 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:17.271 434: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:17.271 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:17:17.271 altname enp218s0f0np0 00:17:17.271 altname ens818f0np0 00:17:17.271 inet 192.168.100.8/24 scope global mlx_0_0 00:17:17.271 valid_lft forever preferred_lft forever 00:17:17.271 17:21:26 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:17.271 17:21:26 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:17.271 17:21:26 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:17.271 17:21:26 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:17.271 17:21:26 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:17.271 17:21:26 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:17.271 17:21:26 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:17.271 17:21:26 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:17.271 17:21:26 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:17.271 435: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:17.271 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:17:17.271 altname enp218s0f1np1 00:17:17.271 altname ens818f1np1 00:17:17.271 inet 192.168.100.9/24 scope global mlx_0_1 00:17:17.271 valid_lft forever preferred_lft forever 00:17:17.271 17:21:26 -- nvmf/common.sh@411 -- # return 0 00:17:17.271 17:21:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:17.271 17:21:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:17.271 17:21:26 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:17:17.271 17:21:26 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:17:17.271 17:21:26 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:17.271 17:21:26 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:17.271 17:21:26 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:17.271 17:21:26 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:17.271 17:21:26 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:17.271 17:21:26 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:17.271 17:21:26 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:17.271 17:21:26 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:17.271 17:21:26 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:17.271 17:21:26 -- 
nvmf/common.sh@104 -- # echo mlx_0_0 00:17:17.271 17:21:26 -- nvmf/common.sh@105 -- # continue 2 00:17:17.271 17:21:26 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:17.271 17:21:26 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:17.271 17:21:26 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:17.271 17:21:26 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:17.271 17:21:26 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:17.271 17:21:26 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:17.271 17:21:26 -- nvmf/common.sh@105 -- # continue 2 00:17:17.271 17:21:26 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:17.271 17:21:26 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:17.271 17:21:26 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:17.271 17:21:26 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:17.271 17:21:26 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:17.271 17:21:26 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:17.271 17:21:26 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:17.271 17:21:26 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:17.271 17:21:26 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:17.271 17:21:26 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:17.271 17:21:26 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:17.272 17:21:26 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:17.272 17:21:26 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:17:17.272 192.168.100.9' 00:17:17.272 17:21:26 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:17.272 192.168.100.9' 00:17:17.272 17:21:26 -- nvmf/common.sh@446 -- # head -n 1 00:17:17.272 17:21:26 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:17.272 17:21:26 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:17:17.272 192.168.100.9' 00:17:17.272 17:21:26 -- nvmf/common.sh@447 -- # tail -n +2 00:17:17.272 17:21:26 -- nvmf/common.sh@447 -- # head -n 1 00:17:17.530 17:21:26 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:17.530 17:21:26 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:17:17.530 17:21:26 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:17.530 17:21:26 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:17:17.530 17:21:26 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:17:17.530 17:21:26 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:17:17.530 17:21:26 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:17:17.530 17:21:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:17.530 17:21:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:17.530 17:21:26 -- common/autotest_common.sh@10 -- # set +x 00:17:17.530 17:21:26 -- nvmf/common.sh@470 -- # nvmfpid=3027516 00:17:17.530 17:21:26 -- nvmf/common.sh@471 -- # waitforlisten 3027516 00:17:17.530 17:21:26 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:17.530 17:21:26 -- common/autotest_common.sh@817 -- # '[' -z 3027516 ']' 00:17:17.530 17:21:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.530 17:21:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:17.530 17:21:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:17.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.530 17:21:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:17.530 17:21:26 -- common/autotest_common.sh@10 -- # set +x 00:17:17.530 [2024-04-24 17:21:26.599098] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:17:17.531 [2024-04-24 17:21:26.599147] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.531 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.531 [2024-04-24 17:21:26.656183] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:17.531 [2024-04-24 17:21:26.727629] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:17.531 [2024-04-24 17:21:26.727670] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:17.531 [2024-04-24 17:21:26.727676] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:17.531 [2024-04-24 17:21:26.727681] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:17.531 [2024-04-24 17:21:26.727686] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:17.531 [2024-04-24 17:21:26.727808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:17.531 [2024-04-24 17:21:26.727883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:17.531 [2024-04-24 17:21:26.728371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.531 [2024-04-24 17:21:26.728371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:18.465 17:21:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:18.465 17:21:27 -- common/autotest_common.sh@850 -- # return 0 00:17:18.465 17:21:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:18.465 17:21:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:18.465 17:21:27 -- common/autotest_common.sh@10 -- # set +x 00:17:18.465 17:21:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.465 17:21:27 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:18.465 17:21:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.465 17:21:27 -- common/autotest_common.sh@10 -- # set +x 00:17:18.465 [2024-04-24 17:21:27.460693] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe53250/0xe57740) succeed. 00:17:18.465 [2024-04-24 17:21:27.470896] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe54840/0xe98dd0) succeed. 
00:17:18.465 17:21:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.465 17:21:27 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:17:18.465 17:21:27 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:17:18.465 17:21:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:18.465 17:21:27 -- common/autotest_common.sh@10 -- # set +x 00:17:18.465 17:21:27 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:18.465 17:21:27 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:18.465 17:21:27 -- target/shutdown.sh@28 -- # cat 00:17:18.465 17:21:27 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:18.465 17:21:27 -- target/shutdown.sh@28 -- # cat 00:17:18.465 17:21:27 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:18.465 17:21:27 -- target/shutdown.sh@28 -- # cat 00:17:18.465 17:21:27 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:18.465 17:21:27 -- target/shutdown.sh@28 -- # cat 00:17:18.465 17:21:27 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:18.465 17:21:27 -- target/shutdown.sh@28 -- # cat 00:17:18.465 17:21:27 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:18.465 17:21:27 -- target/shutdown.sh@28 -- # cat 00:17:18.465 17:21:27 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:18.465 17:21:27 -- target/shutdown.sh@28 -- # cat 00:17:18.465 17:21:27 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:18.465 17:21:27 -- target/shutdown.sh@28 -- # cat 00:17:18.465 17:21:27 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:18.465 17:21:27 -- target/shutdown.sh@28 -- # cat 00:17:18.465 17:21:27 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:18.465 17:21:27 -- target/shutdown.sh@28 -- # cat 00:17:18.465 17:21:27 -- target/shutdown.sh@35 -- # rpc_cmd 00:17:18.465 17:21:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.465 17:21:27 -- common/autotest_common.sh@10 -- # set +x 00:17:18.465 Malloc1 00:17:18.465 [2024-04-24 17:21:27.686104] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:18.465 Malloc2 00:17:18.723 Malloc3 00:17:18.723 Malloc4 00:17:18.723 Malloc5 00:17:18.723 Malloc6 00:17:18.723 Malloc7 00:17:18.982 Malloc8 00:17:18.982 Malloc9 00:17:18.982 Malloc10 00:17:18.982 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.982 17:21:28 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:17:18.982 17:21:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:18.982 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:18.982 17:21:28 -- target/shutdown.sh@78 -- # perfpid=3027587 00:17:18.982 17:21:28 -- target/shutdown.sh@79 -- # waitforlisten 3027587 /var/tmp/bdevperf.sock 00:17:18.982 17:21:28 -- common/autotest_common.sh@817 -- # '[' -z 3027587 ']' 00:17:18.982 17:21:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:18.982 17:21:28 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:17:18.982 17:21:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:18.982 17:21:28 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:17:18.982 17:21:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:18.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:18.982 17:21:28 -- nvmf/common.sh@521 -- # config=() 00:17:18.982 17:21:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:18.982 17:21:28 -- nvmf/common.sh@521 -- # local subsystem config 00:17:18.982 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:18.982 17:21:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:18.982 17:21:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:18.982 { 00:17:18.982 "params": { 00:17:18.982 "name": "Nvme$subsystem", 00:17:18.982 "trtype": "$TEST_TRANSPORT", 00:17:18.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:18.982 "adrfam": "ipv4", 00:17:18.982 "trsvcid": "$NVMF_PORT", 00:17:18.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:18.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:18.982 "hdgst": ${hdgst:-false}, 00:17:18.982 "ddgst": ${ddgst:-false} 00:17:18.982 }, 00:17:18.982 "method": "bdev_nvme_attach_controller" 00:17:18.982 } 00:17:18.982 EOF 00:17:18.982 )") 00:17:18.982 17:21:28 -- nvmf/common.sh@543 -- # cat 00:17:18.982 17:21:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:18.982 17:21:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:18.982 { 00:17:18.982 "params": { 00:17:18.982 "name": "Nvme$subsystem", 00:17:18.982 "trtype": "$TEST_TRANSPORT", 00:17:18.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:18.982 "adrfam": "ipv4", 00:17:18.982 "trsvcid": "$NVMF_PORT", 00:17:18.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:18.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:18.982 "hdgst": ${hdgst:-false}, 00:17:18.982 "ddgst": ${ddgst:-false} 00:17:18.982 }, 00:17:18.982 "method": "bdev_nvme_attach_controller" 00:17:18.982 } 00:17:18.982 EOF 00:17:18.982 )") 00:17:18.982 17:21:28 -- nvmf/common.sh@543 -- # cat 00:17:18.982 17:21:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:18.982 17:21:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:18.982 { 00:17:18.982 "params": { 00:17:18.982 "name": "Nvme$subsystem", 00:17:18.982 "trtype": "$TEST_TRANSPORT", 00:17:18.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:18.982 "adrfam": "ipv4", 00:17:18.982 "trsvcid": "$NVMF_PORT", 00:17:18.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:18.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:18.983 "hdgst": ${hdgst:-false}, 00:17:18.983 "ddgst": ${ddgst:-false} 00:17:18.983 }, 00:17:18.983 "method": "bdev_nvme_attach_controller" 00:17:18.983 } 00:17:18.983 EOF 00:17:18.983 )") 00:17:18.983 17:21:28 -- nvmf/common.sh@543 -- # cat 00:17:18.983 17:21:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:18.983 17:21:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:18.983 { 00:17:18.983 "params": { 00:17:18.983 "name": "Nvme$subsystem", 00:17:18.983 "trtype": "$TEST_TRANSPORT", 00:17:18.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:18.983 "adrfam": "ipv4", 00:17:18.983 "trsvcid": "$NVMF_PORT", 00:17:18.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:18.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:18.983 "hdgst": ${hdgst:-false}, 00:17:18.983 "ddgst": ${ddgst:-false} 00:17:18.983 }, 00:17:18.983 "method": "bdev_nvme_attach_controller" 00:17:18.983 } 00:17:18.983 EOF 00:17:18.983 )") 00:17:18.983 17:21:28 -- nvmf/common.sh@543 -- # cat 00:17:18.983 17:21:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 
00:17:18.983 17:21:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:18.983 { 00:17:18.983 "params": { 00:17:18.983 "name": "Nvme$subsystem", 00:17:18.983 "trtype": "$TEST_TRANSPORT", 00:17:18.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:18.983 "adrfam": "ipv4", 00:17:18.983 "trsvcid": "$NVMF_PORT", 00:17:18.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:18.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:18.983 "hdgst": ${hdgst:-false}, 00:17:18.983 "ddgst": ${ddgst:-false} 00:17:18.983 }, 00:17:18.983 "method": "bdev_nvme_attach_controller" 00:17:18.983 } 00:17:18.983 EOF 00:17:18.983 )") 00:17:18.983 17:21:28 -- nvmf/common.sh@543 -- # cat 00:17:18.983 17:21:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:18.983 17:21:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:18.983 { 00:17:18.983 "params": { 00:17:18.983 "name": "Nvme$subsystem", 00:17:18.983 "trtype": "$TEST_TRANSPORT", 00:17:18.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:18.983 "adrfam": "ipv4", 00:17:18.983 "trsvcid": "$NVMF_PORT", 00:17:18.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:18.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:18.983 "hdgst": ${hdgst:-false}, 00:17:18.983 "ddgst": ${ddgst:-false} 00:17:18.983 }, 00:17:18.983 "method": "bdev_nvme_attach_controller" 00:17:18.983 } 00:17:18.983 EOF 00:17:18.983 )") 00:17:18.983 17:21:28 -- nvmf/common.sh@543 -- # cat 00:17:18.983 [2024-04-24 17:21:28.160380] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:17:18.983 [2024-04-24 17:21:28.160427] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:18.983 17:21:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:18.983 17:21:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:18.983 { 00:17:18.983 "params": { 00:17:18.983 "name": "Nvme$subsystem", 00:17:18.983 "trtype": "$TEST_TRANSPORT", 00:17:18.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:18.983 "adrfam": "ipv4", 00:17:18.983 "trsvcid": "$NVMF_PORT", 00:17:18.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:18.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:18.983 "hdgst": ${hdgst:-false}, 00:17:18.983 "ddgst": ${ddgst:-false} 00:17:18.983 }, 00:17:18.983 "method": "bdev_nvme_attach_controller" 00:17:18.983 } 00:17:18.983 EOF 00:17:18.983 )") 00:17:18.983 17:21:28 -- nvmf/common.sh@543 -- # cat 00:17:18.983 17:21:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:18.983 17:21:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:18.983 { 00:17:18.983 "params": { 00:17:18.983 "name": "Nvme$subsystem", 00:17:18.983 "trtype": "$TEST_TRANSPORT", 00:17:18.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:18.983 "adrfam": "ipv4", 00:17:18.983 "trsvcid": "$NVMF_PORT", 00:17:18.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:18.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:18.983 "hdgst": ${hdgst:-false}, 00:17:18.983 "ddgst": ${ddgst:-false} 00:17:18.983 }, 00:17:18.983 "method": "bdev_nvme_attach_controller" 00:17:18.983 } 00:17:18.983 EOF 00:17:18.983 )") 00:17:18.983 17:21:28 -- nvmf/common.sh@543 -- # cat 00:17:18.983 17:21:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:18.983 17:21:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:18.983 { 00:17:18.983 "params": { 00:17:18.983 "name": 
"Nvme$subsystem", 00:17:18.983 "trtype": "$TEST_TRANSPORT", 00:17:18.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:18.983 "adrfam": "ipv4", 00:17:18.983 "trsvcid": "$NVMF_PORT", 00:17:18.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:18.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:18.983 "hdgst": ${hdgst:-false}, 00:17:18.983 "ddgst": ${ddgst:-false} 00:17:18.983 }, 00:17:18.983 "method": "bdev_nvme_attach_controller" 00:17:18.983 } 00:17:18.983 EOF 00:17:18.983 )") 00:17:18.983 17:21:28 -- nvmf/common.sh@543 -- # cat 00:17:18.983 17:21:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:18.983 17:21:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:18.983 { 00:17:18.983 "params": { 00:17:18.983 "name": "Nvme$subsystem", 00:17:18.983 "trtype": "$TEST_TRANSPORT", 00:17:18.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:18.983 "adrfam": "ipv4", 00:17:18.983 "trsvcid": "$NVMF_PORT", 00:17:18.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:18.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:18.983 "hdgst": ${hdgst:-false}, 00:17:18.983 "ddgst": ${ddgst:-false} 00:17:18.983 }, 00:17:18.983 "method": "bdev_nvme_attach_controller" 00:17:18.983 } 00:17:18.983 EOF 00:17:18.983 )") 00:17:18.983 17:21:28 -- nvmf/common.sh@543 -- # cat 00:17:18.983 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.983 17:21:28 -- nvmf/common.sh@545 -- # jq . 00:17:18.983 17:21:28 -- nvmf/common.sh@546 -- # IFS=, 00:17:18.983 17:21:28 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:18.983 "params": { 00:17:18.983 "name": "Nvme1", 00:17:18.983 "trtype": "rdma", 00:17:18.983 "traddr": "192.168.100.8", 00:17:18.983 "adrfam": "ipv4", 00:17:18.983 "trsvcid": "4420", 00:17:18.983 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:18.983 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:18.983 "hdgst": false, 00:17:18.983 "ddgst": false 00:17:18.983 }, 00:17:18.983 "method": "bdev_nvme_attach_controller" 00:17:18.983 },{ 00:17:18.983 "params": { 00:17:18.983 "name": "Nvme2", 00:17:18.983 "trtype": "rdma", 00:17:18.983 "traddr": "192.168.100.8", 00:17:18.983 "adrfam": "ipv4", 00:17:18.983 "trsvcid": "4420", 00:17:18.983 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:18.983 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:18.983 "hdgst": false, 00:17:18.983 "ddgst": false 00:17:18.983 }, 00:17:18.983 "method": "bdev_nvme_attach_controller" 00:17:18.983 },{ 00:17:18.983 "params": { 00:17:18.983 "name": "Nvme3", 00:17:18.983 "trtype": "rdma", 00:17:18.983 "traddr": "192.168.100.8", 00:17:18.983 "adrfam": "ipv4", 00:17:18.983 "trsvcid": "4420", 00:17:18.983 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:17:18.983 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:17:18.983 "hdgst": false, 00:17:18.983 "ddgst": false 00:17:18.983 }, 00:17:18.983 "method": "bdev_nvme_attach_controller" 00:17:18.983 },{ 00:17:18.983 "params": { 00:17:18.983 "name": "Nvme4", 00:17:18.983 "trtype": "rdma", 00:17:18.983 "traddr": "192.168.100.8", 00:17:18.983 "adrfam": "ipv4", 00:17:18.983 "trsvcid": "4420", 00:17:18.983 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:17:18.983 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:17:18.983 "hdgst": false, 00:17:18.983 "ddgst": false 00:17:18.983 }, 00:17:18.983 "method": "bdev_nvme_attach_controller" 00:17:18.983 },{ 00:17:18.983 "params": { 00:17:18.983 "name": "Nvme5", 00:17:18.983 "trtype": "rdma", 00:17:18.983 "traddr": "192.168.100.8", 00:17:18.983 "adrfam": "ipv4", 00:17:18.983 "trsvcid": "4420", 00:17:18.983 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:17:18.983 
"hostnqn": "nqn.2016-06.io.spdk:host5", 00:17:18.983 "hdgst": false, 00:17:18.983 "ddgst": false 00:17:18.983 }, 00:17:18.983 "method": "bdev_nvme_attach_controller" 00:17:18.983 },{ 00:17:18.983 "params": { 00:17:18.983 "name": "Nvme6", 00:17:18.983 "trtype": "rdma", 00:17:18.983 "traddr": "192.168.100.8", 00:17:18.983 "adrfam": "ipv4", 00:17:18.983 "trsvcid": "4420", 00:17:18.983 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:17:18.983 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:17:18.983 "hdgst": false, 00:17:18.983 "ddgst": false 00:17:18.983 }, 00:17:18.983 "method": "bdev_nvme_attach_controller" 00:17:18.983 },{ 00:17:18.983 "params": { 00:17:18.983 "name": "Nvme7", 00:17:18.983 "trtype": "rdma", 00:17:18.983 "traddr": "192.168.100.8", 00:17:18.983 "adrfam": "ipv4", 00:17:18.983 "trsvcid": "4420", 00:17:18.983 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:17:18.983 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:17:18.983 "hdgst": false, 00:17:18.983 "ddgst": false 00:17:18.983 }, 00:17:18.983 "method": "bdev_nvme_attach_controller" 00:17:18.983 },{ 00:17:18.984 "params": { 00:17:18.984 "name": "Nvme8", 00:17:18.984 "trtype": "rdma", 00:17:18.984 "traddr": "192.168.100.8", 00:17:18.984 "adrfam": "ipv4", 00:17:18.984 "trsvcid": "4420", 00:17:18.984 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:17:18.984 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:17:18.984 "hdgst": false, 00:17:18.984 "ddgst": false 00:17:18.984 }, 00:17:18.984 "method": "bdev_nvme_attach_controller" 00:17:18.984 },{ 00:17:18.984 "params": { 00:17:18.984 "name": "Nvme9", 00:17:18.984 "trtype": "rdma", 00:17:18.984 "traddr": "192.168.100.8", 00:17:18.984 "adrfam": "ipv4", 00:17:18.984 "trsvcid": "4420", 00:17:18.984 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:17:18.984 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:17:18.984 "hdgst": false, 00:17:18.984 "ddgst": false 00:17:18.984 }, 00:17:18.984 "method": "bdev_nvme_attach_controller" 00:17:18.984 },{ 00:17:18.984 "params": { 00:17:18.984 "name": "Nvme10", 00:17:18.984 "trtype": "rdma", 00:17:18.984 "traddr": "192.168.100.8", 00:17:18.984 "adrfam": "ipv4", 00:17:18.984 "trsvcid": "4420", 00:17:18.984 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:17:18.984 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:17:18.984 "hdgst": false, 00:17:18.984 "ddgst": false 00:17:18.984 }, 00:17:18.984 "method": "bdev_nvme_attach_controller" 00:17:18.984 }' 00:17:18.984 [2024-04-24 17:21:28.217161] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.242 [2024-04-24 17:21:28.288703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.175 17:21:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:20.175 17:21:29 -- common/autotest_common.sh@850 -- # return 0 00:17:20.175 17:21:29 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:20.175 17:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.175 17:21:29 -- common/autotest_common.sh@10 -- # set +x 00:17:20.175 17:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.175 17:21:29 -- target/shutdown.sh@83 -- # kill -9 3027587 00:17:20.175 17:21:29 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:17:20.175 17:21:29 -- target/shutdown.sh@87 -- # sleep 1 00:17:21.108 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3027587 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:17:21.108 17:21:30 -- target/shutdown.sh@88 -- # kill -0 
3027516 00:17:21.108 17:21:30 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:21.108 17:21:30 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:17:21.108 17:21:30 -- nvmf/common.sh@521 -- # config=() 00:17:21.108 17:21:30 -- nvmf/common.sh@521 -- # local subsystem config 00:17:21.108 17:21:30 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:21.108 17:21:30 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:21.108 { 00:17:21.108 "params": { 00:17:21.108 "name": "Nvme$subsystem", 00:17:21.108 "trtype": "$TEST_TRANSPORT", 00:17:21.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:21.108 "adrfam": "ipv4", 00:17:21.108 "trsvcid": "$NVMF_PORT", 00:17:21.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:21.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:21.109 "hdgst": ${hdgst:-false}, 00:17:21.109 "ddgst": ${ddgst:-false} 00:17:21.109 }, 00:17:21.109 "method": "bdev_nvme_attach_controller" 00:17:21.109 } 00:17:21.109 EOF 00:17:21.109 )") 00:17:21.109 17:21:30 -- nvmf/common.sh@543 -- # cat 00:17:21.109 17:21:30 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:21.109 17:21:30 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:21.109 { 00:17:21.109 "params": { 00:17:21.109 "name": "Nvme$subsystem", 00:17:21.109 "trtype": "$TEST_TRANSPORT", 00:17:21.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:21.109 "adrfam": "ipv4", 00:17:21.109 "trsvcid": "$NVMF_PORT", 00:17:21.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:21.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:21.109 "hdgst": ${hdgst:-false}, 00:17:21.109 "ddgst": ${ddgst:-false} 00:17:21.109 }, 00:17:21.109 "method": "bdev_nvme_attach_controller" 00:17:21.109 } 00:17:21.109 EOF 00:17:21.109 )") 00:17:21.109 17:21:30 -- nvmf/common.sh@543 -- # cat 00:17:21.109 17:21:30 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:21.109 17:21:30 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:21.109 { 00:17:21.109 "params": { 00:17:21.109 "name": "Nvme$subsystem", 00:17:21.109 "trtype": "$TEST_TRANSPORT", 00:17:21.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:21.109 "adrfam": "ipv4", 00:17:21.109 "trsvcid": "$NVMF_PORT", 00:17:21.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:21.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:21.109 "hdgst": ${hdgst:-false}, 00:17:21.109 "ddgst": ${ddgst:-false} 00:17:21.109 }, 00:17:21.109 "method": "bdev_nvme_attach_controller" 00:17:21.109 } 00:17:21.109 EOF 00:17:21.109 )") 00:17:21.109 17:21:30 -- nvmf/common.sh@543 -- # cat 00:17:21.109 17:21:30 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:21.109 17:21:30 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:21.109 { 00:17:21.109 "params": { 00:17:21.109 "name": "Nvme$subsystem", 00:17:21.109 "trtype": "$TEST_TRANSPORT", 00:17:21.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:21.109 "adrfam": "ipv4", 00:17:21.109 "trsvcid": "$NVMF_PORT", 00:17:21.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:21.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:21.109 "hdgst": ${hdgst:-false}, 00:17:21.109 "ddgst": ${ddgst:-false} 00:17:21.109 }, 00:17:21.109 "method": "bdev_nvme_attach_controller" 00:17:21.109 } 00:17:21.109 EOF 00:17:21.109 )") 00:17:21.109 17:21:30 -- nvmf/common.sh@543 -- # cat 00:17:21.109 17:21:30 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:21.109 17:21:30 -- nvmf/common.sh@543 
-- # config+=("$(cat <<-EOF 00:17:21.109 { 00:17:21.109 "params": { 00:17:21.109 "name": "Nvme$subsystem", 00:17:21.109 "trtype": "$TEST_TRANSPORT", 00:17:21.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:21.109 "adrfam": "ipv4", 00:17:21.109 "trsvcid": "$NVMF_PORT", 00:17:21.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:21.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:21.109 "hdgst": ${hdgst:-false}, 00:17:21.109 "ddgst": ${ddgst:-false} 00:17:21.109 }, 00:17:21.109 "method": "bdev_nvme_attach_controller" 00:17:21.109 } 00:17:21.109 EOF 00:17:21.109 )") 00:17:21.109 17:21:30 -- nvmf/common.sh@543 -- # cat 00:17:21.109 17:21:30 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:21.109 17:21:30 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:21.109 { 00:17:21.109 "params": { 00:17:21.109 "name": "Nvme$subsystem", 00:17:21.109 "trtype": "$TEST_TRANSPORT", 00:17:21.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:21.109 "adrfam": "ipv4", 00:17:21.109 "trsvcid": "$NVMF_PORT", 00:17:21.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:21.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:21.109 "hdgst": ${hdgst:-false}, 00:17:21.109 "ddgst": ${ddgst:-false} 00:17:21.109 }, 00:17:21.109 "method": "bdev_nvme_attach_controller" 00:17:21.109 } 00:17:21.109 EOF 00:17:21.109 )") 00:17:21.109 17:21:30 -- nvmf/common.sh@543 -- # cat 00:17:21.109 [2024-04-24 17:21:30.197004] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:17:21.109 [2024-04-24 17:21:30.197055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3027642 ] 00:17:21.109 17:21:30 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:21.109 17:21:30 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:21.109 { 00:17:21.109 "params": { 00:17:21.109 "name": "Nvme$subsystem", 00:17:21.109 "trtype": "$TEST_TRANSPORT", 00:17:21.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:21.109 "adrfam": "ipv4", 00:17:21.109 "trsvcid": "$NVMF_PORT", 00:17:21.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:21.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:21.109 "hdgst": ${hdgst:-false}, 00:17:21.109 "ddgst": ${ddgst:-false} 00:17:21.109 }, 00:17:21.109 "method": "bdev_nvme_attach_controller" 00:17:21.109 } 00:17:21.109 EOF 00:17:21.109 )") 00:17:21.109 17:21:30 -- nvmf/common.sh@543 -- # cat 00:17:21.109 17:21:30 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:21.109 17:21:30 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:21.109 { 00:17:21.109 "params": { 00:17:21.109 "name": "Nvme$subsystem", 00:17:21.109 "trtype": "$TEST_TRANSPORT", 00:17:21.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:21.109 "adrfam": "ipv4", 00:17:21.109 "trsvcid": "$NVMF_PORT", 00:17:21.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:21.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:21.109 "hdgst": ${hdgst:-false}, 00:17:21.109 "ddgst": ${ddgst:-false} 00:17:21.109 }, 00:17:21.109 "method": "bdev_nvme_attach_controller" 00:17:21.109 } 00:17:21.109 EOF 00:17:21.109 )") 00:17:21.109 17:21:30 -- nvmf/common.sh@543 -- # cat 00:17:21.109 17:21:30 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:21.109 17:21:30 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:21.109 { 00:17:21.109 "params": { 00:17:21.109 "name": "Nvme$subsystem", 00:17:21.109 
"trtype": "$TEST_TRANSPORT", 00:17:21.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:21.109 "adrfam": "ipv4", 00:17:21.109 "trsvcid": "$NVMF_PORT", 00:17:21.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:21.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:21.109 "hdgst": ${hdgst:-false}, 00:17:21.109 "ddgst": ${ddgst:-false} 00:17:21.109 }, 00:17:21.109 "method": "bdev_nvme_attach_controller" 00:17:21.109 } 00:17:21.109 EOF 00:17:21.109 )") 00:17:21.109 17:21:30 -- nvmf/common.sh@543 -- # cat 00:17:21.109 17:21:30 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:21.109 17:21:30 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:21.109 { 00:17:21.109 "params": { 00:17:21.109 "name": "Nvme$subsystem", 00:17:21.109 "trtype": "$TEST_TRANSPORT", 00:17:21.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:21.109 "adrfam": "ipv4", 00:17:21.109 "trsvcid": "$NVMF_PORT", 00:17:21.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:21.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:21.109 "hdgst": ${hdgst:-false}, 00:17:21.109 "ddgst": ${ddgst:-false} 00:17:21.109 }, 00:17:21.109 "method": "bdev_nvme_attach_controller" 00:17:21.109 } 00:17:21.109 EOF 00:17:21.109 )") 00:17:21.109 17:21:30 -- nvmf/common.sh@543 -- # cat 00:17:21.109 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.109 17:21:30 -- nvmf/common.sh@545 -- # jq . 00:17:21.109 17:21:30 -- nvmf/common.sh@546 -- # IFS=, 00:17:21.109 17:21:30 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:21.109 "params": { 00:17:21.109 "name": "Nvme1", 00:17:21.109 "trtype": "rdma", 00:17:21.109 "traddr": "192.168.100.8", 00:17:21.109 "adrfam": "ipv4", 00:17:21.109 "trsvcid": "4420", 00:17:21.109 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.109 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:21.109 "hdgst": false, 00:17:21.109 "ddgst": false 00:17:21.109 }, 00:17:21.109 "method": "bdev_nvme_attach_controller" 00:17:21.109 },{ 00:17:21.109 "params": { 00:17:21.109 "name": "Nvme2", 00:17:21.109 "trtype": "rdma", 00:17:21.109 "traddr": "192.168.100.8", 00:17:21.109 "adrfam": "ipv4", 00:17:21.109 "trsvcid": "4420", 00:17:21.109 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:21.109 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:21.110 "hdgst": false, 00:17:21.110 "ddgst": false 00:17:21.110 }, 00:17:21.110 "method": "bdev_nvme_attach_controller" 00:17:21.110 },{ 00:17:21.110 "params": { 00:17:21.110 "name": "Nvme3", 00:17:21.110 "trtype": "rdma", 00:17:21.110 "traddr": "192.168.100.8", 00:17:21.110 "adrfam": "ipv4", 00:17:21.110 "trsvcid": "4420", 00:17:21.110 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:17:21.110 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:17:21.110 "hdgst": false, 00:17:21.110 "ddgst": false 00:17:21.110 }, 00:17:21.110 "method": "bdev_nvme_attach_controller" 00:17:21.110 },{ 00:17:21.110 "params": { 00:17:21.110 "name": "Nvme4", 00:17:21.110 "trtype": "rdma", 00:17:21.110 "traddr": "192.168.100.8", 00:17:21.110 "adrfam": "ipv4", 00:17:21.110 "trsvcid": "4420", 00:17:21.110 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:17:21.110 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:17:21.110 "hdgst": false, 00:17:21.110 "ddgst": false 00:17:21.110 }, 00:17:21.110 "method": "bdev_nvme_attach_controller" 00:17:21.110 },{ 00:17:21.110 "params": { 00:17:21.110 "name": "Nvme5", 00:17:21.110 "trtype": "rdma", 00:17:21.110 "traddr": "192.168.100.8", 00:17:21.110 "adrfam": "ipv4", 00:17:21.110 "trsvcid": "4420", 00:17:21.110 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:17:21.110 "hostnqn": 
"nqn.2016-06.io.spdk:host5", 00:17:21.110 "hdgst": false, 00:17:21.110 "ddgst": false 00:17:21.110 }, 00:17:21.110 "method": "bdev_nvme_attach_controller" 00:17:21.110 },{ 00:17:21.110 "params": { 00:17:21.110 "name": "Nvme6", 00:17:21.110 "trtype": "rdma", 00:17:21.110 "traddr": "192.168.100.8", 00:17:21.110 "adrfam": "ipv4", 00:17:21.110 "trsvcid": "4420", 00:17:21.110 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:17:21.110 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:17:21.110 "hdgst": false, 00:17:21.110 "ddgst": false 00:17:21.110 }, 00:17:21.110 "method": "bdev_nvme_attach_controller" 00:17:21.110 },{ 00:17:21.110 "params": { 00:17:21.110 "name": "Nvme7", 00:17:21.110 "trtype": "rdma", 00:17:21.110 "traddr": "192.168.100.8", 00:17:21.110 "adrfam": "ipv4", 00:17:21.110 "trsvcid": "4420", 00:17:21.110 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:17:21.110 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:17:21.110 "hdgst": false, 00:17:21.110 "ddgst": false 00:17:21.110 }, 00:17:21.110 "method": "bdev_nvme_attach_controller" 00:17:21.110 },{ 00:17:21.110 "params": { 00:17:21.110 "name": "Nvme8", 00:17:21.110 "trtype": "rdma", 00:17:21.110 "traddr": "192.168.100.8", 00:17:21.110 "adrfam": "ipv4", 00:17:21.110 "trsvcid": "4420", 00:17:21.110 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:17:21.110 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:17:21.110 "hdgst": false, 00:17:21.110 "ddgst": false 00:17:21.110 }, 00:17:21.110 "method": "bdev_nvme_attach_controller" 00:17:21.110 },{ 00:17:21.110 "params": { 00:17:21.110 "name": "Nvme9", 00:17:21.110 "trtype": "rdma", 00:17:21.110 "traddr": "192.168.100.8", 00:17:21.110 "adrfam": "ipv4", 00:17:21.110 "trsvcid": "4420", 00:17:21.110 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:17:21.110 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:17:21.110 "hdgst": false, 00:17:21.110 "ddgst": false 00:17:21.110 }, 00:17:21.110 "method": "bdev_nvme_attach_controller" 00:17:21.110 },{ 00:17:21.110 "params": { 00:17:21.110 "name": "Nvme10", 00:17:21.110 "trtype": "rdma", 00:17:21.110 "traddr": "192.168.100.8", 00:17:21.110 "adrfam": "ipv4", 00:17:21.110 "trsvcid": "4420", 00:17:21.110 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:17:21.110 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:17:21.110 "hdgst": false, 00:17:21.110 "ddgst": false 00:17:21.110 }, 00:17:21.110 "method": "bdev_nvme_attach_controller" 00:17:21.110 }' 00:17:21.110 [2024-04-24 17:21:30.254911] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.110 [2024-04-24 17:21:30.326347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.044 Running I/O for 1 seconds... 
00:17:23.418 00:17:23.418 Latency(us) 00:17:23.418 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.418 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:23.418 Verification LBA range: start 0x0 length 0x400 00:17:23.418 Nvme1n1 : 1.17 379.28 23.70 0.00 0.00 166636.32 8613.30 237677.23 00:17:23.418 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:23.418 Verification LBA range: start 0x0 length 0x400 00:17:23.418 Nvme2n1 : 1.17 382.23 23.89 0.00 0.00 163277.15 9175.04 167772.16 00:17:23.418 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:23.418 Verification LBA range: start 0x0 length 0x400 00:17:23.418 Nvme3n1 : 1.17 381.85 23.87 0.00 0.00 160857.86 9549.53 160781.65 00:17:23.418 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:23.418 Verification LBA range: start 0x0 length 0x400 00:17:23.418 Nvme4n1 : 1.17 390.84 24.43 0.00 0.00 154948.21 5710.99 149796.57 00:17:23.418 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:23.418 Verification LBA range: start 0x0 length 0x400 00:17:23.418 Nvme5n1 : 1.18 381.01 23.81 0.00 0.00 157079.37 10423.34 142806.06 00:17:23.418 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:23.418 Verification LBA range: start 0x0 length 0x400 00:17:23.418 Nvme6n1 : 1.18 380.63 23.79 0.00 0.00 154673.39 10797.84 135815.56 00:17:23.418 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:23.418 Verification LBA range: start 0x0 length 0x400 00:17:23.418 Nvme7n1 : 1.18 380.30 23.77 0.00 0.00 152337.14 10985.08 127826.41 00:17:23.418 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:23.418 Verification LBA range: start 0x0 length 0x400 00:17:23.418 Nvme8n1 : 1.18 379.92 23.75 0.00 0.00 150491.64 11234.74 122833.19 00:17:23.418 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:23.418 Verification LBA range: start 0x0 length 0x400 00:17:23.418 Nvme9n1 : 1.18 378.77 23.67 0.00 0.00 148959.85 2683.86 111848.11 00:17:23.418 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:23.418 Verification LBA range: start 0x0 length 0x400 00:17:23.418 Nvme10n1 : 1.17 328.37 20.52 0.00 0.00 169606.58 8113.98 182751.82 00:17:23.418 =================================================================================================================== 00:17:23.418 Total : 3763.21 235.20 0.00 0.00 157701.95 2683.86 237677.23 00:17:23.675 17:21:32 -- target/shutdown.sh@94 -- # stoptarget 00:17:23.675 17:21:32 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:17:23.675 17:21:32 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:23.675 17:21:32 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:23.675 17:21:32 -- target/shutdown.sh@45 -- # nvmftestfini 00:17:23.675 17:21:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:23.675 17:21:32 -- nvmf/common.sh@117 -- # sync 00:17:23.675 17:21:32 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:23.675 17:21:32 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:23.675 17:21:32 -- nvmf/common.sh@120 -- # set +e 00:17:23.675 17:21:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:23.675 17:21:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:23.675 rmmod nvme_rdma 
00:17:23.675 rmmod nvme_fabrics 00:17:23.676 17:21:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:23.676 17:21:32 -- nvmf/common.sh@124 -- # set -e 00:17:23.676 17:21:32 -- nvmf/common.sh@125 -- # return 0 00:17:23.676 17:21:32 -- nvmf/common.sh@478 -- # '[' -n 3027516 ']' 00:17:23.676 17:21:32 -- nvmf/common.sh@479 -- # killprocess 3027516 00:17:23.676 17:21:32 -- common/autotest_common.sh@936 -- # '[' -z 3027516 ']' 00:17:23.676 17:21:32 -- common/autotest_common.sh@940 -- # kill -0 3027516 00:17:23.676 17:21:32 -- common/autotest_common.sh@941 -- # uname 00:17:23.676 17:21:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:23.676 17:21:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3027516 00:17:23.676 17:21:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:23.676 17:21:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:23.676 17:21:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3027516' 00:17:23.676 killing process with pid 3027516 00:17:23.676 17:21:32 -- common/autotest_common.sh@955 -- # kill 3027516 00:17:23.676 17:21:32 -- common/autotest_common.sh@960 -- # wait 3027516 00:17:24.241 17:21:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:24.241 17:21:33 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:17:24.241 00:17:24.241 real 0m12.251s 00:17:24.241 user 0m30.492s 00:17:24.241 sys 0m5.060s 00:17:24.241 17:21:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:24.241 17:21:33 -- common/autotest_common.sh@10 -- # set +x 00:17:24.241 ************************************ 00:17:24.241 END TEST nvmf_shutdown_tc1 00:17:24.241 ************************************ 00:17:24.241 17:21:33 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:17:24.241 17:21:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:24.241 17:21:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:24.241 17:21:33 -- common/autotest_common.sh@10 -- # set +x 00:17:24.241 ************************************ 00:17:24.241 START TEST nvmf_shutdown_tc2 00:17:24.241 ************************************ 00:17:24.241 17:21:33 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2 00:17:24.241 17:21:33 -- target/shutdown.sh@99 -- # starttarget 00:17:24.241 17:21:33 -- target/shutdown.sh@15 -- # nvmftestinit 00:17:24.241 17:21:33 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:17:24.241 17:21:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:24.241 17:21:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:24.241 17:21:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:24.241 17:21:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:24.241 17:21:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.241 17:21:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:24.241 17:21:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.241 17:21:33 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:24.241 17:21:33 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:24.241 17:21:33 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:24.241 17:21:33 -- common/autotest_common.sh@10 -- # set +x 00:17:24.241 17:21:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:24.241 17:21:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:24.241 17:21:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:24.241 17:21:33 -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:17:24.241 17:21:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:24.241 17:21:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:24.241 17:21:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:24.241 17:21:33 -- nvmf/common.sh@295 -- # net_devs=() 00:17:24.241 17:21:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:24.241 17:21:33 -- nvmf/common.sh@296 -- # e810=() 00:17:24.241 17:21:33 -- nvmf/common.sh@296 -- # local -ga e810 00:17:24.241 17:21:33 -- nvmf/common.sh@297 -- # x722=() 00:17:24.241 17:21:33 -- nvmf/common.sh@297 -- # local -ga x722 00:17:24.241 17:21:33 -- nvmf/common.sh@298 -- # mlx=() 00:17:24.241 17:21:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:24.241 17:21:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:24.241 17:21:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:24.241 17:21:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:24.241 17:21:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:24.241 17:21:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:24.241 17:21:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:24.241 17:21:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:24.241 17:21:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:24.241 17:21:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:24.241 17:21:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:24.241 17:21:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:24.241 17:21:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:24.241 17:21:33 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:24.241 17:21:33 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:24.241 17:21:33 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:24.241 17:21:33 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:24.241 17:21:33 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:24.241 17:21:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:24.241 17:21:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:24.241 17:21:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:17:24.241 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:17:24.241 17:21:33 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:24.241 17:21:33 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:24.241 17:21:33 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:24.241 17:21:33 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:24.241 17:21:33 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:24.241 17:21:33 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:24.241 17:21:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:24.241 17:21:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:17:24.241 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:17:24.241 17:21:33 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:24.241 17:21:33 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:24.241 17:21:33 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:24.241 17:21:33 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:24.241 17:21:33 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:24.241 17:21:33 -- nvmf/common.sh@362 -- # 
NVME_CONNECT='nvme connect -i 15' 00:17:24.241 17:21:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:24.241 17:21:33 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:24.241 17:21:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:24.241 17:21:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:24.241 17:21:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:24.241 17:21:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:24.241 17:21:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:17:24.241 Found net devices under 0000:da:00.0: mlx_0_0 00:17:24.241 17:21:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:24.241 17:21:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:24.241 17:21:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:24.241 17:21:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:24.241 17:21:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:24.241 17:21:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:17:24.241 Found net devices under 0000:da:00.1: mlx_0_1 00:17:24.241 17:21:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:24.241 17:21:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:24.241 17:21:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:24.241 17:21:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:24.241 17:21:33 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:17:24.241 17:21:33 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:17:24.241 17:21:33 -- nvmf/common.sh@409 -- # rdma_device_init 00:17:24.241 17:21:33 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:17:24.241 17:21:33 -- nvmf/common.sh@58 -- # uname 00:17:24.241 17:21:33 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:24.241 17:21:33 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:24.241 17:21:33 -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:24.241 17:21:33 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:24.241 17:21:33 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:24.241 17:21:33 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:24.241 17:21:33 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:24.241 17:21:33 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:24.241 17:21:33 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:17:24.242 17:21:33 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:24.242 17:21:33 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:24.242 17:21:33 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:24.242 17:21:33 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:24.242 17:21:33 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:24.242 17:21:33 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:24.500 17:21:33 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:24.500 17:21:33 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:24.500 17:21:33 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:24.500 17:21:33 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:24.500 17:21:33 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:24.500 17:21:33 -- nvmf/common.sh@105 -- # continue 2 00:17:24.500 17:21:33 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:24.500 17:21:33 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:24.500 17:21:33 -- nvmf/common.sh@103 -- # 
[[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:24.500 17:21:33 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:24.500 17:21:33 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:24.500 17:21:33 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:24.500 17:21:33 -- nvmf/common.sh@105 -- # continue 2 00:17:24.500 17:21:33 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:24.500 17:21:33 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:24.500 17:21:33 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:24.500 17:21:33 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:24.500 17:21:33 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:24.500 17:21:33 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:24.500 17:21:33 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:24.500 17:21:33 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:24.500 17:21:33 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:24.500 434: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:24.500 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:17:24.500 altname enp218s0f0np0 00:17:24.500 altname ens818f0np0 00:17:24.500 inet 192.168.100.8/24 scope global mlx_0_0 00:17:24.500 valid_lft forever preferred_lft forever 00:17:24.500 17:21:33 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:24.500 17:21:33 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:24.500 17:21:33 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:24.500 17:21:33 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:24.500 17:21:33 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:24.500 17:21:33 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:24.500 17:21:33 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:24.500 17:21:33 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:24.500 17:21:33 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:24.500 435: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:24.500 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:17:24.500 altname enp218s0f1np1 00:17:24.500 altname ens818f1np1 00:17:24.500 inet 192.168.100.9/24 scope global mlx_0_1 00:17:24.500 valid_lft forever preferred_lft forever 00:17:24.500 17:21:33 -- nvmf/common.sh@411 -- # return 0 00:17:24.500 17:21:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:24.500 17:21:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:24.500 17:21:33 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:17:24.500 17:21:33 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:17:24.500 17:21:33 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:24.500 17:21:33 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:24.500 17:21:33 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:24.500 17:21:33 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:24.500 17:21:33 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:24.500 17:21:33 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:24.500 17:21:33 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:24.500 17:21:33 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:24.500 17:21:33 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:24.500 17:21:33 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:24.500 17:21:33 -- nvmf/common.sh@105 -- # continue 2 00:17:24.500 17:21:33 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:24.500 17:21:33 -- nvmf/common.sh@102 -- # 
for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:24.500 17:21:33 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:24.500 17:21:33 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:24.500 17:21:33 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:24.500 17:21:33 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:24.500 17:21:33 -- nvmf/common.sh@105 -- # continue 2 00:17:24.500 17:21:33 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:24.500 17:21:33 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:24.500 17:21:33 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:24.500 17:21:33 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:24.500 17:21:33 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:24.500 17:21:33 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:24.500 17:21:33 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:24.500 17:21:33 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:24.500 17:21:33 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:24.500 17:21:33 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:24.500 17:21:33 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:24.500 17:21:33 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:24.500 17:21:33 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:17:24.500 192.168.100.9' 00:17:24.500 17:21:33 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:24.500 192.168.100.9' 00:17:24.500 17:21:33 -- nvmf/common.sh@446 -- # head -n 1 00:17:24.500 17:21:33 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:24.500 17:21:33 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:17:24.500 192.168.100.9' 00:17:24.500 17:21:33 -- nvmf/common.sh@447 -- # tail -n +2 00:17:24.500 17:21:33 -- nvmf/common.sh@447 -- # head -n 1 00:17:24.500 17:21:33 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:24.500 17:21:33 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:17:24.500 17:21:33 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:24.500 17:21:33 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:17:24.500 17:21:33 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:17:24.500 17:21:33 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:17:24.500 17:21:33 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:17:24.500 17:21:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:24.500 17:21:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:24.500 17:21:33 -- common/autotest_common.sh@10 -- # set +x 00:17:24.500 17:21:33 -- nvmf/common.sh@470 -- # nvmfpid=3027800 00:17:24.500 17:21:33 -- nvmf/common.sh@471 -- # waitforlisten 3027800 00:17:24.500 17:21:33 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:24.500 17:21:33 -- common/autotest_common.sh@817 -- # '[' -z 3027800 ']' 00:17:24.500 17:21:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.500 17:21:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:24.500 17:21:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
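The waitforlisten step in progress here amounts to starting nvmf_tgt in the background and polling its RPC socket until it responds; the real helper in autotest_common.sh also toggles xtrace and caps the wait at max_retries=100 as seen in the trace. A simplified stand-in, with rootdir assumed to point at the SPDK checkout:

# Simplified waitforlisten: poll the UNIX-domain RPC socket until the target answers.
rpc_sock=/var/tmp/spdk.sock
"$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
    if [[ -S $rpc_sock ]] && "$rootdir/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods &> /dev/null; then
        break                                   # target is up and serving RPCs
    fi
    kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
    sleep 0.5
done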
00:17:24.500 17:21:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:24.500 17:21:33 -- common/autotest_common.sh@10 -- # set +x 00:17:24.500 [2024-04-24 17:21:33.676031] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:17:24.500 [2024-04-24 17:21:33.676080] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:24.500 EAL: No free 2048 kB hugepages reported on node 1 00:17:24.500 [2024-04-24 17:21:33.733192] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:24.757 [2024-04-24 17:21:33.807396] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:24.757 [2024-04-24 17:21:33.807439] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:24.757 [2024-04-24 17:21:33.807446] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:24.757 [2024-04-24 17:21:33.807452] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:24.757 [2024-04-24 17:21:33.807457] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:24.757 [2024-04-24 17:21:33.807560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:24.757 [2024-04-24 17:21:33.807667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:24.757 [2024-04-24 17:21:33.807775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:24.757 [2024-04-24 17:21:33.807776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:25.320 17:21:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:25.320 17:21:34 -- common/autotest_common.sh@850 -- # return 0 00:17:25.320 17:21:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:25.320 17:21:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:25.320 17:21:34 -- common/autotest_common.sh@10 -- # set +x 00:17:25.320 17:21:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:25.320 17:21:34 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:25.320 17:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:25.320 17:21:34 -- common/autotest_common.sh@10 -- # set +x 00:17:25.320 [2024-04-24 17:21:34.538977] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22db250/0x22df740) succeed. 00:17:25.320 [2024-04-24 17:21:34.549104] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x22dc840/0x2320dd0) succeed. 
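The create_subsystems phase that follows repeats what tc1 did above: shutdown.sh appends one batch of RPCs per subsystem to rpcs.txt (the for i / cat loop) and then replays the whole file through rpc_cmd, which is what produces the Malloc1 through Malloc10 bdevs and the RDMA listener on 192.168.100.8:4420 in the output. The batch contents are not captured in this trace; a typical equivalent issued directly with scripts/rpc.py (malloc geometry and serial numbers are illustrative) would be:

# Per subsystem: a malloc bdev, the NVMe-oF subsystem, its namespace, and the
# RDMA listener. The 128 MiB / 512 B malloc size and SPDK$i serials are
# placeholders, not values taken from this log.
for i in {1..10}; do
    "$rootdir/scripts/rpc.py" bdev_malloc_create -b "Malloc$i" 128 512
    "$rootdir/scripts/rpc.py" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    "$rootdir/scripts/rpc.py" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    "$rootdir/scripts/rpc.py" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t rdma -a 192.168.100.8 -s 4420
done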
00:17:25.577 17:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:25.577 17:21:34 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:17:25.577 17:21:34 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:17:25.577 17:21:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:25.577 17:21:34 -- common/autotest_common.sh@10 -- # set +x 00:17:25.577 17:21:34 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:25.577 17:21:34 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:25.577 17:21:34 -- target/shutdown.sh@28 -- # cat 00:17:25.577 17:21:34 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:25.577 17:21:34 -- target/shutdown.sh@28 -- # cat 00:17:25.577 17:21:34 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:25.578 17:21:34 -- target/shutdown.sh@28 -- # cat 00:17:25.578 17:21:34 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:25.578 17:21:34 -- target/shutdown.sh@28 -- # cat 00:17:25.578 17:21:34 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:25.578 17:21:34 -- target/shutdown.sh@28 -- # cat 00:17:25.578 17:21:34 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:25.578 17:21:34 -- target/shutdown.sh@28 -- # cat 00:17:25.578 17:21:34 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:25.578 17:21:34 -- target/shutdown.sh@28 -- # cat 00:17:25.578 17:21:34 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:25.578 17:21:34 -- target/shutdown.sh@28 -- # cat 00:17:25.578 17:21:34 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:25.578 17:21:34 -- target/shutdown.sh@28 -- # cat 00:17:25.578 17:21:34 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:25.578 17:21:34 -- target/shutdown.sh@28 -- # cat 00:17:25.578 17:21:34 -- target/shutdown.sh@35 -- # rpc_cmd 00:17:25.578 17:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:25.578 17:21:34 -- common/autotest_common.sh@10 -- # set +x 00:17:25.578 Malloc1 00:17:25.578 [2024-04-24 17:21:34.757286] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:25.578 Malloc2 00:17:25.578 Malloc3 00:17:25.835 Malloc4 00:17:25.835 Malloc5 00:17:25.835 Malloc6 00:17:25.835 Malloc7 00:17:25.835 Malloc8 00:17:25.835 Malloc9 00:17:26.093 Malloc10 00:17:26.093 17:21:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:26.093 17:21:35 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:17:26.093 17:21:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:26.093 17:21:35 -- common/autotest_common.sh@10 -- # set +x 00:17:26.093 17:21:35 -- target/shutdown.sh@103 -- # perfpid=3027869 00:17:26.093 17:21:35 -- target/shutdown.sh@104 -- # waitforlisten 3027869 /var/tmp/bdevperf.sock 00:17:26.093 17:21:35 -- common/autotest_common.sh@817 -- # '[' -z 3027869 ']' 00:17:26.093 17:21:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:26.093 17:21:35 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:26.093 17:21:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:26.093 17:21:35 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:17:26.093 17:21:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:26.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:26.093 17:21:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:26.093 17:21:35 -- nvmf/common.sh@521 -- # config=() 00:17:26.093 17:21:35 -- common/autotest_common.sh@10 -- # set +x 00:17:26.093 17:21:35 -- nvmf/common.sh@521 -- # local subsystem config 00:17:26.093 17:21:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:26.094 17:21:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:26.094 { 00:17:26.094 "params": { 00:17:26.094 "name": "Nvme$subsystem", 00:17:26.094 "trtype": "$TEST_TRANSPORT", 00:17:26.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:26.094 "adrfam": "ipv4", 00:17:26.094 "trsvcid": "$NVMF_PORT", 00:17:26.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:26.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:26.094 "hdgst": ${hdgst:-false}, 00:17:26.094 "ddgst": ${ddgst:-false} 00:17:26.094 }, 00:17:26.094 "method": "bdev_nvme_attach_controller" 00:17:26.094 } 00:17:26.094 EOF 00:17:26.094 )") 00:17:26.094 17:21:35 -- nvmf/common.sh@543 -- # cat 00:17:26.094 17:21:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:26.094 17:21:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:26.094 { 00:17:26.094 "params": { 00:17:26.094 "name": "Nvme$subsystem", 00:17:26.094 "trtype": "$TEST_TRANSPORT", 00:17:26.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:26.094 "adrfam": "ipv4", 00:17:26.094 "trsvcid": "$NVMF_PORT", 00:17:26.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:26.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:26.094 "hdgst": ${hdgst:-false}, 00:17:26.094 "ddgst": ${ddgst:-false} 00:17:26.094 }, 00:17:26.094 "method": "bdev_nvme_attach_controller" 00:17:26.094 } 00:17:26.094 EOF 00:17:26.094 )") 00:17:26.094 17:21:35 -- nvmf/common.sh@543 -- # cat 00:17:26.094 17:21:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:26.094 17:21:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:26.094 { 00:17:26.094 "params": { 00:17:26.094 "name": "Nvme$subsystem", 00:17:26.094 "trtype": "$TEST_TRANSPORT", 00:17:26.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:26.094 "adrfam": "ipv4", 00:17:26.094 "trsvcid": "$NVMF_PORT", 00:17:26.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:26.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:26.094 "hdgst": ${hdgst:-false}, 00:17:26.094 "ddgst": ${ddgst:-false} 00:17:26.094 }, 00:17:26.094 "method": "bdev_nvme_attach_controller" 00:17:26.094 } 00:17:26.094 EOF 00:17:26.094 )") 00:17:26.094 17:21:35 -- nvmf/common.sh@543 -- # cat 00:17:26.094 17:21:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:26.094 17:21:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:26.094 { 00:17:26.094 "params": { 00:17:26.094 "name": "Nvme$subsystem", 00:17:26.094 "trtype": "$TEST_TRANSPORT", 00:17:26.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:26.094 "adrfam": "ipv4", 00:17:26.094 "trsvcid": "$NVMF_PORT", 00:17:26.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:26.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:26.094 "hdgst": ${hdgst:-false}, 00:17:26.094 "ddgst": ${ddgst:-false} 00:17:26.094 }, 00:17:26.094 "method": "bdev_nvme_attach_controller" 00:17:26.094 } 00:17:26.094 EOF 00:17:26.094 )") 00:17:26.094 17:21:35 -- nvmf/common.sh@543 -- # cat 00:17:26.094 17:21:35 -- nvmf/common.sh@523 -- # for subsystem in 
"${@:-1}" 00:17:26.094 17:21:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:26.094 { 00:17:26.094 "params": { 00:17:26.094 "name": "Nvme$subsystem", 00:17:26.094 "trtype": "$TEST_TRANSPORT", 00:17:26.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:26.094 "adrfam": "ipv4", 00:17:26.094 "trsvcid": "$NVMF_PORT", 00:17:26.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:26.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:26.094 "hdgst": ${hdgst:-false}, 00:17:26.094 "ddgst": ${ddgst:-false} 00:17:26.094 }, 00:17:26.094 "method": "bdev_nvme_attach_controller" 00:17:26.094 } 00:17:26.094 EOF 00:17:26.094 )") 00:17:26.094 17:21:35 -- nvmf/common.sh@543 -- # cat 00:17:26.094 17:21:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:26.094 17:21:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:26.094 { 00:17:26.094 "params": { 00:17:26.094 "name": "Nvme$subsystem", 00:17:26.094 "trtype": "$TEST_TRANSPORT", 00:17:26.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:26.094 "adrfam": "ipv4", 00:17:26.094 "trsvcid": "$NVMF_PORT", 00:17:26.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:26.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:26.094 "hdgst": ${hdgst:-false}, 00:17:26.094 "ddgst": ${ddgst:-false} 00:17:26.094 }, 00:17:26.094 "method": "bdev_nvme_attach_controller" 00:17:26.094 } 00:17:26.094 EOF 00:17:26.094 )") 00:17:26.094 17:21:35 -- nvmf/common.sh@543 -- # cat 00:17:26.094 [2024-04-24 17:21:35.226718] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:17:26.094 17:21:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:26.094 [2024-04-24 17:21:35.226768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3027869 ] 00:17:26.094 17:21:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:26.094 { 00:17:26.094 "params": { 00:17:26.094 "name": "Nvme$subsystem", 00:17:26.094 "trtype": "$TEST_TRANSPORT", 00:17:26.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:26.094 "adrfam": "ipv4", 00:17:26.094 "trsvcid": "$NVMF_PORT", 00:17:26.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:26.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:26.094 "hdgst": ${hdgst:-false}, 00:17:26.094 "ddgst": ${ddgst:-false} 00:17:26.094 }, 00:17:26.094 "method": "bdev_nvme_attach_controller" 00:17:26.094 } 00:17:26.094 EOF 00:17:26.094 )") 00:17:26.094 17:21:35 -- nvmf/common.sh@543 -- # cat 00:17:26.094 17:21:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:26.094 17:21:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:26.094 { 00:17:26.094 "params": { 00:17:26.094 "name": "Nvme$subsystem", 00:17:26.094 "trtype": "$TEST_TRANSPORT", 00:17:26.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:26.094 "adrfam": "ipv4", 00:17:26.094 "trsvcid": "$NVMF_PORT", 00:17:26.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:26.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:26.094 "hdgst": ${hdgst:-false}, 00:17:26.094 "ddgst": ${ddgst:-false} 00:17:26.094 }, 00:17:26.094 "method": "bdev_nvme_attach_controller" 00:17:26.094 } 00:17:26.094 EOF 00:17:26.094 )") 00:17:26.094 17:21:35 -- nvmf/common.sh@543 -- # cat 00:17:26.094 17:21:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:26.094 17:21:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:26.094 { 00:17:26.094 "params": { 
00:17:26.094 "name": "Nvme$subsystem", 00:17:26.094 "trtype": "$TEST_TRANSPORT", 00:17:26.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:26.094 "adrfam": "ipv4", 00:17:26.094 "trsvcid": "$NVMF_PORT", 00:17:26.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:26.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:26.094 "hdgst": ${hdgst:-false}, 00:17:26.094 "ddgst": ${ddgst:-false} 00:17:26.094 }, 00:17:26.094 "method": "bdev_nvme_attach_controller" 00:17:26.094 } 00:17:26.094 EOF 00:17:26.094 )") 00:17:26.094 17:21:35 -- nvmf/common.sh@543 -- # cat 00:17:26.094 17:21:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:26.094 17:21:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:26.094 { 00:17:26.094 "params": { 00:17:26.094 "name": "Nvme$subsystem", 00:17:26.094 "trtype": "$TEST_TRANSPORT", 00:17:26.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:26.094 "adrfam": "ipv4", 00:17:26.094 "trsvcid": "$NVMF_PORT", 00:17:26.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:26.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:26.094 "hdgst": ${hdgst:-false}, 00:17:26.094 "ddgst": ${ddgst:-false} 00:17:26.094 }, 00:17:26.094 "method": "bdev_nvme_attach_controller" 00:17:26.094 } 00:17:26.094 EOF 00:17:26.094 )") 00:17:26.094 17:21:35 -- nvmf/common.sh@543 -- # cat 00:17:26.094 EAL: No free 2048 kB hugepages reported on node 1 00:17:26.094 17:21:35 -- nvmf/common.sh@545 -- # jq . 00:17:26.094 17:21:35 -- nvmf/common.sh@546 -- # IFS=, 00:17:26.094 17:21:35 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:26.094 "params": { 00:17:26.094 "name": "Nvme1", 00:17:26.094 "trtype": "rdma", 00:17:26.094 "traddr": "192.168.100.8", 00:17:26.094 "adrfam": "ipv4", 00:17:26.094 "trsvcid": "4420", 00:17:26.094 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:26.094 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:26.094 "hdgst": false, 00:17:26.094 "ddgst": false 00:17:26.094 }, 00:17:26.094 "method": "bdev_nvme_attach_controller" 00:17:26.094 },{ 00:17:26.094 "params": { 00:17:26.094 "name": "Nvme2", 00:17:26.094 "trtype": "rdma", 00:17:26.094 "traddr": "192.168.100.8", 00:17:26.094 "adrfam": "ipv4", 00:17:26.094 "trsvcid": "4420", 00:17:26.094 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:26.094 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:26.094 "hdgst": false, 00:17:26.094 "ddgst": false 00:17:26.094 }, 00:17:26.094 "method": "bdev_nvme_attach_controller" 00:17:26.094 },{ 00:17:26.094 "params": { 00:17:26.094 "name": "Nvme3", 00:17:26.094 "trtype": "rdma", 00:17:26.094 "traddr": "192.168.100.8", 00:17:26.094 "adrfam": "ipv4", 00:17:26.094 "trsvcid": "4420", 00:17:26.094 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:17:26.094 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:17:26.094 "hdgst": false, 00:17:26.094 "ddgst": false 00:17:26.094 }, 00:17:26.094 "method": "bdev_nvme_attach_controller" 00:17:26.094 },{ 00:17:26.094 "params": { 00:17:26.094 "name": "Nvme4", 00:17:26.094 "trtype": "rdma", 00:17:26.094 "traddr": "192.168.100.8", 00:17:26.094 "adrfam": "ipv4", 00:17:26.094 "trsvcid": "4420", 00:17:26.094 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:17:26.094 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:17:26.094 "hdgst": false, 00:17:26.094 "ddgst": false 00:17:26.094 }, 00:17:26.094 "method": "bdev_nvme_attach_controller" 00:17:26.094 },{ 00:17:26.094 "params": { 00:17:26.094 "name": "Nvme5", 00:17:26.094 "trtype": "rdma", 00:17:26.094 "traddr": "192.168.100.8", 00:17:26.094 "adrfam": "ipv4", 00:17:26.094 "trsvcid": "4420", 00:17:26.094 "subnqn": 
"nqn.2016-06.io.spdk:cnode5", 00:17:26.094 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:17:26.094 "hdgst": false, 00:17:26.094 "ddgst": false 00:17:26.094 }, 00:17:26.094 "method": "bdev_nvme_attach_controller" 00:17:26.094 },{ 00:17:26.094 "params": { 00:17:26.094 "name": "Nvme6", 00:17:26.094 "trtype": "rdma", 00:17:26.094 "traddr": "192.168.100.8", 00:17:26.094 "adrfam": "ipv4", 00:17:26.094 "trsvcid": "4420", 00:17:26.094 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:17:26.094 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:17:26.094 "hdgst": false, 00:17:26.094 "ddgst": false 00:17:26.094 }, 00:17:26.094 "method": "bdev_nvme_attach_controller" 00:17:26.094 },{ 00:17:26.094 "params": { 00:17:26.094 "name": "Nvme7", 00:17:26.094 "trtype": "rdma", 00:17:26.094 "traddr": "192.168.100.8", 00:17:26.094 "adrfam": "ipv4", 00:17:26.094 "trsvcid": "4420", 00:17:26.094 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:17:26.094 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:17:26.094 "hdgst": false, 00:17:26.094 "ddgst": false 00:17:26.094 }, 00:17:26.094 "method": "bdev_nvme_attach_controller" 00:17:26.094 },{ 00:17:26.094 "params": { 00:17:26.094 "name": "Nvme8", 00:17:26.094 "trtype": "rdma", 00:17:26.094 "traddr": "192.168.100.8", 00:17:26.094 "adrfam": "ipv4", 00:17:26.094 "trsvcid": "4420", 00:17:26.094 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:17:26.094 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:17:26.094 "hdgst": false, 00:17:26.094 "ddgst": false 00:17:26.094 }, 00:17:26.094 "method": "bdev_nvme_attach_controller" 00:17:26.094 },{ 00:17:26.094 "params": { 00:17:26.094 "name": "Nvme9", 00:17:26.094 "trtype": "rdma", 00:17:26.094 "traddr": "192.168.100.8", 00:17:26.094 "adrfam": "ipv4", 00:17:26.094 "trsvcid": "4420", 00:17:26.094 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:17:26.094 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:17:26.094 "hdgst": false, 00:17:26.094 "ddgst": false 00:17:26.094 }, 00:17:26.094 "method": "bdev_nvme_attach_controller" 00:17:26.094 },{ 00:17:26.094 "params": { 00:17:26.094 "name": "Nvme10", 00:17:26.094 "trtype": "rdma", 00:17:26.094 "traddr": "192.168.100.8", 00:17:26.094 "adrfam": "ipv4", 00:17:26.094 "trsvcid": "4420", 00:17:26.094 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:17:26.094 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:17:26.094 "hdgst": false, 00:17:26.094 "ddgst": false 00:17:26.094 }, 00:17:26.094 "method": "bdev_nvme_attach_controller" 00:17:26.094 }' 00:17:26.094 [2024-04-24 17:21:35.283224] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.352 [2024-04-24 17:21:35.354420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.286 Running I/O for 10 seconds... 
00:17:27.286 17:21:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:27.286 17:21:36 -- common/autotest_common.sh@850 -- # return 0 00:17:27.286 17:21:36 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:27.286 17:21:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:27.286 17:21:36 -- common/autotest_common.sh@10 -- # set +x 00:17:27.286 17:21:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:27.286 17:21:36 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:17:27.286 17:21:36 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:27.286 17:21:36 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:17:27.286 17:21:36 -- target/shutdown.sh@57 -- # local ret=1 00:17:27.286 17:21:36 -- target/shutdown.sh@58 -- # local i 00:17:27.286 17:21:36 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:17:27.286 17:21:36 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:27.286 17:21:36 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:27.286 17:21:36 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:27.286 17:21:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:27.286 17:21:36 -- common/autotest_common.sh@10 -- # set +x 00:17:27.286 17:21:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:27.286 17:21:36 -- target/shutdown.sh@60 -- # read_io_count=3 00:17:27.287 17:21:36 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:17:27.287 17:21:36 -- target/shutdown.sh@67 -- # sleep 0.25 00:17:27.544 17:21:36 -- target/shutdown.sh@59 -- # (( i-- )) 00:17:27.544 17:21:36 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:27.544 17:21:36 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:27.544 17:21:36 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:27.544 17:21:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:27.544 17:21:36 -- common/autotest_common.sh@10 -- # set +x 00:17:27.802 17:21:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:27.802 17:21:36 -- target/shutdown.sh@60 -- # read_io_count=151 00:17:27.802 17:21:36 -- target/shutdown.sh@63 -- # '[' 151 -ge 100 ']' 00:17:27.802 17:21:36 -- target/shutdown.sh@64 -- # ret=0 00:17:27.802 17:21:36 -- target/shutdown.sh@65 -- # break 00:17:27.802 17:21:36 -- target/shutdown.sh@69 -- # return 0 00:17:27.802 17:21:36 -- target/shutdown.sh@110 -- # killprocess 3027869 00:17:27.802 17:21:36 -- common/autotest_common.sh@936 -- # '[' -z 3027869 ']' 00:17:27.802 17:21:36 -- common/autotest_common.sh@940 -- # kill -0 3027869 00:17:27.802 17:21:36 -- common/autotest_common.sh@941 -- # uname 00:17:27.802 17:21:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:27.802 17:21:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3027869 00:17:27.802 17:21:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:27.802 17:21:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:27.802 17:21:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3027869' 00:17:27.802 killing process with pid 3027869 00:17:27.802 17:21:36 -- common/autotest_common.sh@955 -- # kill 3027869 00:17:27.802 17:21:36 -- common/autotest_common.sh@960 -- # wait 3027869 00:17:28.059 Received shutdown signal, test time was about 0.826638 seconds 00:17:28.059 00:17:28.059 Latency(us) 00:17:28.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:17:28.059 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:28.059 Verification LBA range: start 0x0 length 0x400 00:17:28.059 Nvme1n1 : 0.81 340.21 21.26 0.00 0.00 183661.70 7365.00 207717.91 00:17:28.059 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:28.059 Verification LBA range: start 0x0 length 0x400 00:17:28.059 Nvme2n1 : 0.81 334.89 20.93 0.00 0.00 182521.66 7365.00 191739.61 00:17:28.059 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:28.059 Verification LBA range: start 0x0 length 0x400 00:17:28.059 Nvme3n1 : 0.81 393.42 24.59 0.00 0.00 152703.80 5305.30 143804.71 00:17:28.059 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:28.059 Verification LBA range: start 0x0 length 0x400 00:17:28.059 Nvme4n1 : 0.81 392.84 24.55 0.00 0.00 149855.82 7926.74 136814.20 00:17:28.059 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:28.059 Verification LBA range: start 0x0 length 0x400 00:17:28.059 Nvme5n1 : 0.82 392.13 24.51 0.00 0.00 147545.53 8550.89 127327.09 00:17:28.059 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:28.059 Verification LBA range: start 0x0 length 0x400 00:17:28.059 Nvme6n1 : 0.82 391.43 24.46 0.00 0.00 144695.20 9175.04 118339.29 00:17:28.059 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:28.059 Verification LBA range: start 0x0 length 0x400 00:17:28.059 Nvme7n1 : 0.82 390.77 24.42 0.00 0.00 141764.12 9799.19 110849.46 00:17:28.059 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:28.059 Verification LBA range: start 0x0 length 0x400 00:17:28.059 Nvme8n1 : 0.82 389.98 24.37 0.00 0.00 139480.11 10673.01 100363.70 00:17:28.059 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:28.059 Verification LBA range: start 0x0 length 0x400 00:17:28.059 Nvme9n1 : 0.82 389.18 24.32 0.00 0.00 136779.68 11609.23 97367.77 00:17:28.059 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:28.059 Verification LBA range: start 0x0 length 0x400 00:17:28.059 Nvme10n1 : 0.83 309.93 19.37 0.00 0.00 167389.47 3027.14 209715.20 00:17:28.059 =================================================================================================================== 00:17:28.059 Total : 3724.78 232.80 0.00 0.00 153512.48 3027.14 209715.20 00:17:28.317 17:21:37 -- target/shutdown.sh@113 -- # sleep 1 00:17:29.249 17:21:38 -- target/shutdown.sh@114 -- # kill -0 3027800 00:17:29.249 17:21:38 -- target/shutdown.sh@116 -- # stoptarget 00:17:29.249 17:21:38 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:17:29.249 17:21:38 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:29.249 17:21:38 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:29.249 17:21:38 -- target/shutdown.sh@45 -- # nvmftestfini 00:17:29.249 17:21:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:29.249 17:21:38 -- nvmf/common.sh@117 -- # sync 00:17:29.249 17:21:38 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:29.249 17:21:38 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:29.249 17:21:38 -- nvmf/common.sh@120 -- # set +e 00:17:29.249 17:21:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:29.249 17:21:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:29.249 rmmod 
nvme_rdma 00:17:29.249 rmmod nvme_fabrics 00:17:29.249 17:21:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:29.249 17:21:38 -- nvmf/common.sh@124 -- # set -e 00:17:29.249 17:21:38 -- nvmf/common.sh@125 -- # return 0 00:17:29.249 17:21:38 -- nvmf/common.sh@478 -- # '[' -n 3027800 ']' 00:17:29.249 17:21:38 -- nvmf/common.sh@479 -- # killprocess 3027800 00:17:29.249 17:21:38 -- common/autotest_common.sh@936 -- # '[' -z 3027800 ']' 00:17:29.249 17:21:38 -- common/autotest_common.sh@940 -- # kill -0 3027800 00:17:29.249 17:21:38 -- common/autotest_common.sh@941 -- # uname 00:17:29.249 17:21:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:29.249 17:21:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3027800 00:17:29.249 17:21:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:29.249 17:21:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:29.249 17:21:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3027800' 00:17:29.249 killing process with pid 3027800 00:17:29.249 17:21:38 -- common/autotest_common.sh@955 -- # kill 3027800 00:17:29.249 17:21:38 -- common/autotest_common.sh@960 -- # wait 3027800 00:17:29.815 17:21:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:29.815 17:21:38 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:17:29.815 00:17:29.815 real 0m5.532s 00:17:29.815 user 0m22.371s 00:17:29.815 sys 0m1.003s 00:17:29.815 17:21:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:29.815 17:21:38 -- common/autotest_common.sh@10 -- # set +x 00:17:29.815 ************************************ 00:17:29.815 END TEST nvmf_shutdown_tc2 00:17:29.815 ************************************ 00:17:29.815 17:21:38 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:17:29.815 17:21:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:29.815 17:21:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:29.815 17:21:38 -- common/autotest_common.sh@10 -- # set +x 00:17:30.074 ************************************ 00:17:30.074 START TEST nvmf_shutdown_tc3 00:17:30.074 ************************************ 00:17:30.074 17:21:39 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3 00:17:30.074 17:21:39 -- target/shutdown.sh@121 -- # starttarget 00:17:30.074 17:21:39 -- target/shutdown.sh@15 -- # nvmftestinit 00:17:30.074 17:21:39 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:17:30.074 17:21:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:30.074 17:21:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:30.074 17:21:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:30.074 17:21:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:30.074 17:21:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.074 17:21:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:30.074 17:21:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.074 17:21:39 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:30.074 17:21:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:30.074 17:21:39 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:30.074 17:21:39 -- common/autotest_common.sh@10 -- # set +x 00:17:30.074 17:21:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:30.074 17:21:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:30.074 17:21:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:30.074 17:21:39 -- nvmf/common.sh@292 -- 
# pci_net_devs=() 00:17:30.074 17:21:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:30.074 17:21:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:30.074 17:21:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:30.074 17:21:39 -- nvmf/common.sh@295 -- # net_devs=() 00:17:30.074 17:21:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:30.074 17:21:39 -- nvmf/common.sh@296 -- # e810=() 00:17:30.074 17:21:39 -- nvmf/common.sh@296 -- # local -ga e810 00:17:30.074 17:21:39 -- nvmf/common.sh@297 -- # x722=() 00:17:30.074 17:21:39 -- nvmf/common.sh@297 -- # local -ga x722 00:17:30.074 17:21:39 -- nvmf/common.sh@298 -- # mlx=() 00:17:30.074 17:21:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:30.074 17:21:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:30.074 17:21:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:30.074 17:21:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:30.074 17:21:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:30.074 17:21:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:30.074 17:21:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:30.074 17:21:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:30.074 17:21:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:30.074 17:21:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:30.074 17:21:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:30.074 17:21:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:30.074 17:21:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:30.074 17:21:39 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:30.074 17:21:39 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:30.074 17:21:39 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:30.074 17:21:39 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:30.074 17:21:39 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:30.074 17:21:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:30.074 17:21:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:30.074 17:21:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:17:30.074 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:17:30.074 17:21:39 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:30.074 17:21:39 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:30.074 17:21:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:30.074 17:21:39 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:30.074 17:21:39 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:30.074 17:21:39 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:30.074 17:21:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:30.074 17:21:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:17:30.074 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:17:30.074 17:21:39 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:30.074 17:21:39 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:30.074 17:21:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:30.074 17:21:39 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:30.074 17:21:39 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:30.074 17:21:39 -- nvmf/common.sh@362 -- # 
NVME_CONNECT='nvme connect -i 15' 00:17:30.074 17:21:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:30.074 17:21:39 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:30.074 17:21:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:30.074 17:21:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.074 17:21:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:30.074 17:21:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.074 17:21:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:17:30.074 Found net devices under 0000:da:00.0: mlx_0_0 00:17:30.074 17:21:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.075 17:21:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:30.075 17:21:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.075 17:21:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:30.075 17:21:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.075 17:21:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:17:30.075 Found net devices under 0000:da:00.1: mlx_0_1 00:17:30.075 17:21:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.075 17:21:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:30.075 17:21:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:30.075 17:21:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:30.075 17:21:39 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:17:30.075 17:21:39 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:17:30.075 17:21:39 -- nvmf/common.sh@409 -- # rdma_device_init 00:17:30.075 17:21:39 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:17:30.075 17:21:39 -- nvmf/common.sh@58 -- # uname 00:17:30.075 17:21:39 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:30.075 17:21:39 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:30.075 17:21:39 -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:30.075 17:21:39 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:30.075 17:21:39 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:30.075 17:21:39 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:30.075 17:21:39 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:30.075 17:21:39 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:30.075 17:21:39 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:17:30.075 17:21:39 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:30.075 17:21:39 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:30.075 17:21:39 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:30.075 17:21:39 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:30.075 17:21:39 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:30.075 17:21:39 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:30.075 17:21:39 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:30.075 17:21:39 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:30.075 17:21:39 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:30.075 17:21:39 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:30.075 17:21:39 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:30.075 17:21:39 -- nvmf/common.sh@105 -- # continue 2 00:17:30.075 17:21:39 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:30.075 17:21:39 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:30.075 17:21:39 -- nvmf/common.sh@103 -- # 
[[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:30.075 17:21:39 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:30.075 17:21:39 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:30.075 17:21:39 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:30.075 17:21:39 -- nvmf/common.sh@105 -- # continue 2 00:17:30.075 17:21:39 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:30.075 17:21:39 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:30.075 17:21:39 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:30.075 17:21:39 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:30.075 17:21:39 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:30.075 17:21:39 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:30.075 17:21:39 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:30.075 17:21:39 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:30.075 17:21:39 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:30.075 434: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:30.075 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:17:30.075 altname enp218s0f0np0 00:17:30.075 altname ens818f0np0 00:17:30.075 inet 192.168.100.8/24 scope global mlx_0_0 00:17:30.075 valid_lft forever preferred_lft forever 00:17:30.075 17:21:39 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:30.075 17:21:39 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:30.075 17:21:39 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:30.075 17:21:39 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:30.075 17:21:39 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:30.075 17:21:39 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:30.075 17:21:39 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:30.075 17:21:39 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:30.075 17:21:39 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:30.075 435: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:30.075 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:17:30.075 altname enp218s0f1np1 00:17:30.075 altname ens818f1np1 00:17:30.075 inet 192.168.100.9/24 scope global mlx_0_1 00:17:30.075 valid_lft forever preferred_lft forever 00:17:30.075 17:21:39 -- nvmf/common.sh@411 -- # return 0 00:17:30.075 17:21:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:30.075 17:21:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:30.075 17:21:39 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:17:30.075 17:21:39 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:17:30.075 17:21:39 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:30.075 17:21:39 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:30.075 17:21:39 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:30.075 17:21:39 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:30.075 17:21:39 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:30.075 17:21:39 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:30.075 17:21:39 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:30.075 17:21:39 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:30.075 17:21:39 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:30.075 17:21:39 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:30.075 17:21:39 -- nvmf/common.sh@105 -- # continue 2 00:17:30.075 17:21:39 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:30.075 17:21:39 -- nvmf/common.sh@102 -- # 
for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:30.075 17:21:39 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:30.075 17:21:39 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:30.075 17:21:39 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:30.075 17:21:39 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:30.075 17:21:39 -- nvmf/common.sh@105 -- # continue 2 00:17:30.075 17:21:39 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:30.075 17:21:39 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:30.075 17:21:39 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:30.075 17:21:39 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:30.075 17:21:39 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:30.075 17:21:39 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:30.075 17:21:39 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:30.075 17:21:39 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:30.075 17:21:39 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:30.075 17:21:39 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:30.075 17:21:39 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:30.075 17:21:39 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:30.075 17:21:39 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:17:30.075 192.168.100.9' 00:17:30.075 17:21:39 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:30.075 192.168.100.9' 00:17:30.075 17:21:39 -- nvmf/common.sh@446 -- # head -n 1 00:17:30.075 17:21:39 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:30.075 17:21:39 -- nvmf/common.sh@447 -- # tail -n +2 00:17:30.075 17:21:39 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:17:30.075 192.168.100.9' 00:17:30.075 17:21:39 -- nvmf/common.sh@447 -- # head -n 1 00:17:30.075 17:21:39 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:30.075 17:21:39 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:17:30.075 17:21:39 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:30.075 17:21:39 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:17:30.075 17:21:39 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:17:30.075 17:21:39 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:17:30.075 17:21:39 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:17:30.075 17:21:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:30.075 17:21:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:30.075 17:21:39 -- common/autotest_common.sh@10 -- # set +x 00:17:30.075 17:21:39 -- nvmf/common.sh@470 -- # nvmfpid=3028044 00:17:30.075 17:21:39 -- nvmf/common.sh@471 -- # waitforlisten 3028044 00:17:30.075 17:21:39 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:30.075 17:21:39 -- common/autotest_common.sh@817 -- # '[' -z 3028044 ']' 00:17:30.075 17:21:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.075 17:21:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:30.075 17:21:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
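// 
The allocate_nic_ips / get_ip_address traces above reduce to a small pattern: for each RDMA-capable netdev discovered under the Mellanox PCI functions, parse its IPv4 address out of "ip -o -4 addr show", and take the first two hits as the target addresses. A sketch using the interface names from this host (mlx_0_0, mlx_0_1); other machines will report different names.

get_ip_address() {
    local interface=$1
    # ip -o -4 prints one line per address; field 4 is "addr/prefix".
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

rdma_ips=()
for nic in mlx_0_0 mlx_0_1; do
    rdma_ips+=("$(get_ip_address "$nic")")
done
NVMF_FIRST_TARGET_IP=${rdma_ips[0]}    # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=${rdma_ips[1]}   # 192.168.100.9 in this run
echo "${rdma_ips[@]}"
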
00:17:30.075 17:21:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:30.075 17:21:39 -- common/autotest_common.sh@10 -- # set +x 00:17:30.334 [2024-04-24 17:21:39.330510] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:17:30.334 [2024-04-24 17:21:39.330558] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.334 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.334 [2024-04-24 17:21:39.387001] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:30.334 [2024-04-24 17:21:39.458706] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.334 [2024-04-24 17:21:39.458747] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.334 [2024-04-24 17:21:39.458753] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.334 [2024-04-24 17:21:39.458759] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.334 [2024-04-24 17:21:39.458764] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:30.334 [2024-04-24 17:21:39.458870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:30.334 [2024-04-24 17:21:39.458958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:30.334 [2024-04-24 17:21:39.459044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.334 [2024-04-24 17:21:39.459045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:30.962 17:21:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:30.962 17:21:40 -- common/autotest_common.sh@850 -- # return 0 00:17:30.962 17:21:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:30.962 17:21:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:30.962 17:21:40 -- common/autotest_common.sh@10 -- # set +x 00:17:30.962 17:21:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.962 17:21:40 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:30.962 17:21:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:30.962 17:21:40 -- common/autotest_common.sh@10 -- # set +x 00:17:31.261 [2024-04-24 17:21:40.198165] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20c8250/0x20cc740) succeed. 00:17:31.261 [2024-04-24 17:21:40.208573] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20c9840/0x210ddd0) succeed. 
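
Between the nvmf_create_transport call above and the Malloc*/listener lines that follow, the target is configured entirely over JSON-RPC. A standalone equivalent of that sequence, sketched with the stock scripts/rpc.py client, is below; the test itself drives these calls through its rpc_cmd wrapper, and the Malloc bdev geometry shown here is illustrative rather than taken from this run.

rpc=./scripts/rpc.py   # standard SPDK RPC client path (assumed cwd is the spdk repo)
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create -b Malloc1 64 512
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

Repeating the bdev/subsystem/listener triple for cnode2 through cnode10 yields the ten Malloc-backed subsystems that the "NVMe/RDMA Target Listening on 192.168.100.8 port 4420" notice below refers to.
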
00:17:31.261 17:21:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:31.261 17:21:40 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:17:31.261 17:21:40 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:17:31.261 17:21:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:31.261 17:21:40 -- common/autotest_common.sh@10 -- # set +x 00:17:31.261 17:21:40 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:31.261 17:21:40 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:31.261 17:21:40 -- target/shutdown.sh@28 -- # cat 00:17:31.261 17:21:40 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:31.261 17:21:40 -- target/shutdown.sh@28 -- # cat 00:17:31.261 17:21:40 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:31.261 17:21:40 -- target/shutdown.sh@28 -- # cat 00:17:31.261 17:21:40 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:31.261 17:21:40 -- target/shutdown.sh@28 -- # cat 00:17:31.261 17:21:40 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:31.261 17:21:40 -- target/shutdown.sh@28 -- # cat 00:17:31.261 17:21:40 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:31.261 17:21:40 -- target/shutdown.sh@28 -- # cat 00:17:31.261 17:21:40 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:31.261 17:21:40 -- target/shutdown.sh@28 -- # cat 00:17:31.261 17:21:40 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:31.261 17:21:40 -- target/shutdown.sh@28 -- # cat 00:17:31.261 17:21:40 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:31.261 17:21:40 -- target/shutdown.sh@28 -- # cat 00:17:31.261 17:21:40 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:31.261 17:21:40 -- target/shutdown.sh@28 -- # cat 00:17:31.261 17:21:40 -- target/shutdown.sh@35 -- # rpc_cmd 00:17:31.261 17:21:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:31.261 17:21:40 -- common/autotest_common.sh@10 -- # set +x 00:17:31.261 Malloc1 00:17:31.261 [2024-04-24 17:21:40.416675] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:31.261 Malloc2 00:17:31.261 Malloc3 00:17:31.522 Malloc4 00:17:31.522 Malloc5 00:17:31.522 Malloc6 00:17:31.522 Malloc7 00:17:31.522 Malloc8 00:17:31.522 Malloc9 00:17:31.780 Malloc10 00:17:31.780 17:21:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:31.780 17:21:40 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:17:31.780 17:21:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:31.780 17:21:40 -- common/autotest_common.sh@10 -- # set +x 00:17:31.780 17:21:40 -- target/shutdown.sh@125 -- # perfpid=3028125 00:17:31.780 17:21:40 -- target/shutdown.sh@126 -- # waitforlisten 3028125 /var/tmp/bdevperf.sock 00:17:31.780 17:21:40 -- common/autotest_common.sh@817 -- # '[' -z 3028125 ']' 00:17:31.780 17:21:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:31.780 17:21:40 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:31.780 17:21:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:31.780 17:21:40 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:17:31.780 17:21:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:31.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:31.780 17:21:40 -- nvmf/common.sh@521 -- # config=() 00:17:31.780 17:21:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:31.780 17:21:40 -- nvmf/common.sh@521 -- # local subsystem config 00:17:31.780 17:21:40 -- common/autotest_common.sh@10 -- # set +x 00:17:31.780 17:21:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:31.780 17:21:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:31.780 { 00:17:31.780 "params": { 00:17:31.780 "name": "Nvme$subsystem", 00:17:31.780 "trtype": "$TEST_TRANSPORT", 00:17:31.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:31.780 "adrfam": "ipv4", 00:17:31.780 "trsvcid": "$NVMF_PORT", 00:17:31.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:31.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:31.780 "hdgst": ${hdgst:-false}, 00:17:31.780 "ddgst": ${ddgst:-false} 00:17:31.780 }, 00:17:31.780 "method": "bdev_nvme_attach_controller" 00:17:31.780 } 00:17:31.780 EOF 00:17:31.780 )") 00:17:31.780 17:21:40 -- nvmf/common.sh@543 -- # cat 00:17:31.780 17:21:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:31.780 17:21:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:31.780 { 00:17:31.780 "params": { 00:17:31.780 "name": "Nvme$subsystem", 00:17:31.780 "trtype": "$TEST_TRANSPORT", 00:17:31.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:31.780 "adrfam": "ipv4", 00:17:31.780 "trsvcid": "$NVMF_PORT", 00:17:31.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:31.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:31.780 "hdgst": ${hdgst:-false}, 00:17:31.780 "ddgst": ${ddgst:-false} 00:17:31.780 }, 00:17:31.780 "method": "bdev_nvme_attach_controller" 00:17:31.780 } 00:17:31.780 EOF 00:17:31.780 )") 00:17:31.780 17:21:40 -- nvmf/common.sh@543 -- # cat 00:17:31.780 17:21:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:31.780 17:21:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:31.780 { 00:17:31.780 "params": { 00:17:31.780 "name": "Nvme$subsystem", 00:17:31.780 "trtype": "$TEST_TRANSPORT", 00:17:31.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:31.780 "adrfam": "ipv4", 00:17:31.780 "trsvcid": "$NVMF_PORT", 00:17:31.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:31.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:31.780 "hdgst": ${hdgst:-false}, 00:17:31.780 "ddgst": ${ddgst:-false} 00:17:31.780 }, 00:17:31.780 "method": "bdev_nvme_attach_controller" 00:17:31.780 } 00:17:31.780 EOF 00:17:31.780 )") 00:17:31.780 17:21:40 -- nvmf/common.sh@543 -- # cat 00:17:31.780 17:21:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:31.780 17:21:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:31.780 { 00:17:31.780 "params": { 00:17:31.780 "name": "Nvme$subsystem", 00:17:31.780 "trtype": "$TEST_TRANSPORT", 00:17:31.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:31.780 "adrfam": "ipv4", 00:17:31.780 "trsvcid": "$NVMF_PORT", 00:17:31.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:31.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:31.780 "hdgst": ${hdgst:-false}, 00:17:31.780 "ddgst": ${ddgst:-false} 00:17:31.780 }, 00:17:31.780 "method": "bdev_nvme_attach_controller" 00:17:31.780 } 00:17:31.780 EOF 00:17:31.780 )") 00:17:31.780 17:21:40 -- nvmf/common.sh@543 -- # cat 00:17:31.780 17:21:40 -- nvmf/common.sh@523 -- # for subsystem in 
"${@:-1}" 00:17:31.780 17:21:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:31.780 { 00:17:31.780 "params": { 00:17:31.780 "name": "Nvme$subsystem", 00:17:31.780 "trtype": "$TEST_TRANSPORT", 00:17:31.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:31.780 "adrfam": "ipv4", 00:17:31.780 "trsvcid": "$NVMF_PORT", 00:17:31.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:31.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:31.780 "hdgst": ${hdgst:-false}, 00:17:31.780 "ddgst": ${ddgst:-false} 00:17:31.780 }, 00:17:31.780 "method": "bdev_nvme_attach_controller" 00:17:31.780 } 00:17:31.780 EOF 00:17:31.780 )") 00:17:31.780 17:21:40 -- nvmf/common.sh@543 -- # cat 00:17:31.780 17:21:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:31.780 17:21:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:31.780 { 00:17:31.780 "params": { 00:17:31.780 "name": "Nvme$subsystem", 00:17:31.780 "trtype": "$TEST_TRANSPORT", 00:17:31.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:31.780 "adrfam": "ipv4", 00:17:31.780 "trsvcid": "$NVMF_PORT", 00:17:31.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:31.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:31.780 "hdgst": ${hdgst:-false}, 00:17:31.780 "ddgst": ${ddgst:-false} 00:17:31.780 }, 00:17:31.780 "method": "bdev_nvme_attach_controller" 00:17:31.780 } 00:17:31.780 EOF 00:17:31.780 )") 00:17:31.780 17:21:40 -- nvmf/common.sh@543 -- # cat 00:17:31.780 [2024-04-24 17:21:40.887623] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:17:31.780 [2024-04-24 17:21:40.887670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3028125 ] 00:17:31.780 17:21:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:31.780 17:21:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:31.780 { 00:17:31.780 "params": { 00:17:31.781 "name": "Nvme$subsystem", 00:17:31.781 "trtype": "$TEST_TRANSPORT", 00:17:31.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:31.781 "adrfam": "ipv4", 00:17:31.781 "trsvcid": "$NVMF_PORT", 00:17:31.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:31.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:31.781 "hdgst": ${hdgst:-false}, 00:17:31.781 "ddgst": ${ddgst:-false} 00:17:31.781 }, 00:17:31.781 "method": "bdev_nvme_attach_controller" 00:17:31.781 } 00:17:31.781 EOF 00:17:31.781 )") 00:17:31.781 17:21:40 -- nvmf/common.sh@543 -- # cat 00:17:31.781 17:21:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:31.781 17:21:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:31.781 { 00:17:31.781 "params": { 00:17:31.781 "name": "Nvme$subsystem", 00:17:31.781 "trtype": "$TEST_TRANSPORT", 00:17:31.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:31.781 "adrfam": "ipv4", 00:17:31.781 "trsvcid": "$NVMF_PORT", 00:17:31.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:31.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:31.781 "hdgst": ${hdgst:-false}, 00:17:31.781 "ddgst": ${ddgst:-false} 00:17:31.781 }, 00:17:31.781 "method": "bdev_nvme_attach_controller" 00:17:31.781 } 00:17:31.781 EOF 00:17:31.781 )") 00:17:31.781 17:21:40 -- nvmf/common.sh@543 -- # cat 00:17:31.781 17:21:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:31.781 17:21:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:31.781 { 00:17:31.781 "params": { 
00:17:31.781 "name": "Nvme$subsystem", 00:17:31.781 "trtype": "$TEST_TRANSPORT", 00:17:31.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:31.781 "adrfam": "ipv4", 00:17:31.781 "trsvcid": "$NVMF_PORT", 00:17:31.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:31.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:31.781 "hdgst": ${hdgst:-false}, 00:17:31.781 "ddgst": ${ddgst:-false} 00:17:31.781 }, 00:17:31.781 "method": "bdev_nvme_attach_controller" 00:17:31.781 } 00:17:31.781 EOF 00:17:31.781 )") 00:17:31.781 17:21:40 -- nvmf/common.sh@543 -- # cat 00:17:31.781 17:21:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:31.781 17:21:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:31.781 { 00:17:31.781 "params": { 00:17:31.781 "name": "Nvme$subsystem", 00:17:31.781 "trtype": "$TEST_TRANSPORT", 00:17:31.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:31.781 "adrfam": "ipv4", 00:17:31.781 "trsvcid": "$NVMF_PORT", 00:17:31.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:31.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:31.781 "hdgst": ${hdgst:-false}, 00:17:31.781 "ddgst": ${ddgst:-false} 00:17:31.781 }, 00:17:31.781 "method": "bdev_nvme_attach_controller" 00:17:31.781 } 00:17:31.781 EOF 00:17:31.781 )") 00:17:31.781 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.781 17:21:40 -- nvmf/common.sh@543 -- # cat 00:17:31.781 17:21:40 -- nvmf/common.sh@545 -- # jq . 00:17:31.781 17:21:40 -- nvmf/common.sh@546 -- # IFS=, 00:17:31.781 17:21:40 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:31.781 "params": { 00:17:31.781 "name": "Nvme1", 00:17:31.781 "trtype": "rdma", 00:17:31.781 "traddr": "192.168.100.8", 00:17:31.781 "adrfam": "ipv4", 00:17:31.781 "trsvcid": "4420", 00:17:31.781 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:31.781 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:31.781 "hdgst": false, 00:17:31.781 "ddgst": false 00:17:31.781 }, 00:17:31.781 "method": "bdev_nvme_attach_controller" 00:17:31.781 },{ 00:17:31.781 "params": { 00:17:31.781 "name": "Nvme2", 00:17:31.781 "trtype": "rdma", 00:17:31.781 "traddr": "192.168.100.8", 00:17:31.781 "adrfam": "ipv4", 00:17:31.781 "trsvcid": "4420", 00:17:31.781 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:31.781 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:31.781 "hdgst": false, 00:17:31.781 "ddgst": false 00:17:31.781 }, 00:17:31.781 "method": "bdev_nvme_attach_controller" 00:17:31.781 },{ 00:17:31.781 "params": { 00:17:31.781 "name": "Nvme3", 00:17:31.781 "trtype": "rdma", 00:17:31.781 "traddr": "192.168.100.8", 00:17:31.781 "adrfam": "ipv4", 00:17:31.781 "trsvcid": "4420", 00:17:31.781 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:17:31.781 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:17:31.781 "hdgst": false, 00:17:31.781 "ddgst": false 00:17:31.781 }, 00:17:31.781 "method": "bdev_nvme_attach_controller" 00:17:31.781 },{ 00:17:31.781 "params": { 00:17:31.781 "name": "Nvme4", 00:17:31.781 "trtype": "rdma", 00:17:31.781 "traddr": "192.168.100.8", 00:17:31.781 "adrfam": "ipv4", 00:17:31.781 "trsvcid": "4420", 00:17:31.781 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:17:31.781 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:17:31.781 "hdgst": false, 00:17:31.781 "ddgst": false 00:17:31.781 }, 00:17:31.781 "method": "bdev_nvme_attach_controller" 00:17:31.781 },{ 00:17:31.781 "params": { 00:17:31.781 "name": "Nvme5", 00:17:31.781 "trtype": "rdma", 00:17:31.781 "traddr": "192.168.100.8", 00:17:31.781 "adrfam": "ipv4", 00:17:31.781 "trsvcid": "4420", 00:17:31.781 "subnqn": 
"nqn.2016-06.io.spdk:cnode5", 00:17:31.781 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:17:31.781 "hdgst": false, 00:17:31.781 "ddgst": false 00:17:31.781 }, 00:17:31.781 "method": "bdev_nvme_attach_controller" 00:17:31.781 },{ 00:17:31.781 "params": { 00:17:31.781 "name": "Nvme6", 00:17:31.781 "trtype": "rdma", 00:17:31.781 "traddr": "192.168.100.8", 00:17:31.781 "adrfam": "ipv4", 00:17:31.781 "trsvcid": "4420", 00:17:31.781 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:17:31.781 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:17:31.781 "hdgst": false, 00:17:31.781 "ddgst": false 00:17:31.781 }, 00:17:31.781 "method": "bdev_nvme_attach_controller" 00:17:31.781 },{ 00:17:31.781 "params": { 00:17:31.781 "name": "Nvme7", 00:17:31.781 "trtype": "rdma", 00:17:31.781 "traddr": "192.168.100.8", 00:17:31.781 "adrfam": "ipv4", 00:17:31.781 "trsvcid": "4420", 00:17:31.781 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:17:31.781 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:17:31.781 "hdgst": false, 00:17:31.781 "ddgst": false 00:17:31.781 }, 00:17:31.781 "method": "bdev_nvme_attach_controller" 00:17:31.781 },{ 00:17:31.781 "params": { 00:17:31.781 "name": "Nvme8", 00:17:31.781 "trtype": "rdma", 00:17:31.781 "traddr": "192.168.100.8", 00:17:31.781 "adrfam": "ipv4", 00:17:31.781 "trsvcid": "4420", 00:17:31.781 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:17:31.781 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:17:31.781 "hdgst": false, 00:17:31.781 "ddgst": false 00:17:31.781 }, 00:17:31.781 "method": "bdev_nvme_attach_controller" 00:17:31.781 },{ 00:17:31.781 "params": { 00:17:31.781 "name": "Nvme9", 00:17:31.781 "trtype": "rdma", 00:17:31.781 "traddr": "192.168.100.8", 00:17:31.781 "adrfam": "ipv4", 00:17:31.781 "trsvcid": "4420", 00:17:31.781 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:17:31.781 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:17:31.781 "hdgst": false, 00:17:31.781 "ddgst": false 00:17:31.781 }, 00:17:31.781 "method": "bdev_nvme_attach_controller" 00:17:31.781 },{ 00:17:31.781 "params": { 00:17:31.781 "name": "Nvme10", 00:17:31.781 "trtype": "rdma", 00:17:31.781 "traddr": "192.168.100.8", 00:17:31.781 "adrfam": "ipv4", 00:17:31.781 "trsvcid": "4420", 00:17:31.781 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:17:31.781 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:17:31.781 "hdgst": false, 00:17:31.781 "ddgst": false 00:17:31.781 }, 00:17:31.781 "method": "bdev_nvme_attach_controller" 00:17:31.781 }' 00:17:31.781 [2024-04-24 17:21:40.945893] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.781 [2024-04-24 17:21:41.017369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.734 Running I/O for 10 seconds... 
00:17:32.734 17:21:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:32.734 17:21:41 -- common/autotest_common.sh@850 -- # return 0 00:17:32.734 17:21:41 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:32.734 17:21:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:32.734 17:21:41 -- common/autotest_common.sh@10 -- # set +x 00:17:32.992 17:21:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:32.992 17:21:42 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:32.992 17:21:42 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:17:32.992 17:21:42 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:32.992 17:21:42 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:17:32.992 17:21:42 -- target/shutdown.sh@57 -- # local ret=1 00:17:32.992 17:21:42 -- target/shutdown.sh@58 -- # local i 00:17:32.992 17:21:42 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:17:32.992 17:21:42 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:32.992 17:21:42 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:32.992 17:21:42 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:32.992 17:21:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:32.992 17:21:42 -- common/autotest_common.sh@10 -- # set +x 00:17:32.992 17:21:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:32.992 17:21:42 -- target/shutdown.sh@60 -- # read_io_count=3 00:17:32.992 17:21:42 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:17:32.992 17:21:42 -- target/shutdown.sh@67 -- # sleep 0.25 00:17:33.249 17:21:42 -- target/shutdown.sh@59 -- # (( i-- )) 00:17:33.249 17:21:42 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:33.249 17:21:42 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:33.249 17:21:42 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:33.249 17:21:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:33.249 17:21:42 -- common/autotest_common.sh@10 -- # set +x 00:17:33.507 17:21:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:33.507 17:21:42 -- target/shutdown.sh@60 -- # read_io_count=155 00:17:33.507 17:21:42 -- target/shutdown.sh@63 -- # '[' 155 -ge 100 ']' 00:17:33.507 17:21:42 -- target/shutdown.sh@64 -- # ret=0 00:17:33.507 17:21:42 -- target/shutdown.sh@65 -- # break 00:17:33.507 17:21:42 -- target/shutdown.sh@69 -- # return 0 00:17:33.507 17:21:42 -- target/shutdown.sh@135 -- # killprocess 3028044 00:17:33.507 17:21:42 -- common/autotest_common.sh@936 -- # '[' -z 3028044 ']' 00:17:33.507 17:21:42 -- common/autotest_common.sh@940 -- # kill -0 3028044 00:17:33.507 17:21:42 -- common/autotest_common.sh@941 -- # uname 00:17:33.507 17:21:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:33.507 17:21:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3028044 00:17:33.507 17:21:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:33.507 17:21:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:33.507 17:21:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3028044' 00:17:33.507 killing process with pid 3028044 00:17:33.507 17:21:42 -- common/autotest_common.sh@955 -- # kill 3028044 00:17:33.507 17:21:42 -- common/autotest_common.sh@960 -- # wait 3028044 00:17:34.073 17:21:43 -- 
target/shutdown.sh@136 -- # nvmfpid= 00:17:34.073 17:21:43 -- target/shutdown.sh@139 -- # sleep 1 00:17:34.642 [2024-04-24 17:21:43.691488] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019257b00 was disconnected and freed. reset controller. 00:17:34.642 [2024-04-24 17:21:43.693949] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192578c0 was disconnected and freed. reset controller. 00:17:34.642 [2024-04-24 17:21:43.696143] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019257680 was disconnected and freed. reset controller. 00:17:34.642 [2024-04-24 17:21:43.698377] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019257440 was disconnected and freed. reset controller. 00:17:34.642 [2024-04-24 17:21:43.700896] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019257200 was disconnected and freed. reset controller. 00:17:34.642 [2024-04-24 17:21:43.703376] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256fc0 was disconnected and freed. reset controller. 00:17:34.642 [2024-04-24 17:21:43.705863] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256d80 was disconnected and freed. reset controller. 00:17:34.642 [2024-04-24 17:21:43.708422] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256b40 was disconnected and freed. reset controller. 00:17:34.642 [2024-04-24 17:21:43.708512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0dfd80 len:0x10000 key:0x182d00 00:17:34.642 [2024-04-24 17:21:43.708543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.642 [2024-04-24 17:21:43.708580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0cfd00 len:0x10000 key:0x182d00 00:17:34.642 [2024-04-24 17:21:43.708603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.642 [2024-04-24 17:21:43.708640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0bfc80 len:0x10000 key:0x182d00 00:17:34.642 [2024-04-24 17:21:43.708663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.642 [2024-04-24 17:21:43.708690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0afc00 len:0x10000 key:0x182d00 00:17:34.642 [2024-04-24 17:21:43.708711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.642 [2024-04-24 17:21:43.708738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b09fb80 len:0x10000 key:0x182d00 00:17:34.642 [2024-04-24 17:21:43.708759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.642 [2024-04-24 17:21:43.708786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 
lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b08fb00 len:0x10000 key:0x182d00 00:17:34.642 [2024-04-24 17:21:43.708808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.642 [2024-04-24 17:21:43.708846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b07fa80 len:0x10000 key:0x182d00 00:17:34.642 [2024-04-24 17:21:43.708871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.642 [2024-04-24 17:21:43.708884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b06fa00 len:0x10000 key:0x182d00 00:17:34.642 [2024-04-24 17:21:43.708895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.642 [2024-04-24 17:21:43.708907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b05f980 len:0x10000 key:0x182d00 00:17:34.642 [2024-04-24 17:21:43.708918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.642 [2024-04-24 17:21:43.708931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b04f900 len:0x10000 key:0x182d00 00:17:34.642 [2024-04-24 17:21:43.708941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.642 [2024-04-24 17:21:43.708954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b03f880 len:0x10000 key:0x182d00 00:17:34.642 [2024-04-24 17:21:43.708964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.642 [2024-04-24 17:21:43.708977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02f800 len:0x10000 key:0x182d00 00:17:34.642 [2024-04-24 17:21:43.708987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.642 [2024-04-24 17:21:43.708999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b01f780 len:0x10000 key:0x182d00 00:17:34.642 [2024-04-24 17:21:43.709010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.642 [2024-04-24 17:21:43.709025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b00f700 len:0x10000 key:0x182d00 00:17:34.642 [2024-04-24 17:21:43.709035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 
lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aedf780 len:0x10000 key:0x183200 00:17:34.643 [2024-04-24 17:21:43.709058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aecf700 len:0x10000 key:0x183200 00:17:34.643 [2024-04-24 17:21:43.709081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aebf680 len:0x10000 key:0x183200 00:17:34.643 [2024-04-24 17:21:43.709103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aeaf600 len:0x10000 key:0x183200 00:17:34.643 [2024-04-24 17:21:43.709126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae9f580 len:0x10000 key:0x183200 00:17:34.643 [2024-04-24 17:21:43.709150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae8f500 len:0x10000 key:0x183200 00:17:34.643 [2024-04-24 17:21:43.709172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae7f480 len:0x10000 key:0x183200 00:17:34.643 [2024-04-24 17:21:43.709196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae6f400 len:0x10000 key:0x183200 00:17:34.643 [2024-04-24 17:21:43.709219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae5f380 len:0x10000 key:0x183200 00:17:34.643 [2024-04-24 17:21:43.709242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae4f300 len:0x10000 key:0x183200 00:17:34.643 [2024-04-24 17:21:43.709265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae3f280 len:0x10000 key:0x183200 00:17:34.643 [2024-04-24 17:21:43.709291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae2f200 len:0x10000 key:0x183200 00:17:34.643 [2024-04-24 17:21:43.709314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae1f180 len:0x10000 key:0x183200 00:17:34.643 [2024-04-24 17:21:43.709337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae0f100 len:0x10000 key:0x183200 00:17:34.643 [2024-04-24 17:21:43.709360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3f0000 len:0x10000 key:0x183c00 00:17:34.643 [2024-04-24 17:21:43.709383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3dff80 len:0x10000 key:0x183c00 00:17:34.643 [2024-04-24 17:21:43.709406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3cff00 len:0x10000 key:0x183c00 00:17:34.643 [2024-04-24 17:21:43.709428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3bfe80 len:0x10000 key:0x183c00 00:17:34.643 [2024-04-24 17:21:43.709451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 
lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3afe00 len:0x10000 key:0x183c00 00:17:34.643 [2024-04-24 17:21:43.709473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b39fd80 len:0x10000 key:0x183c00 00:17:34.643 [2024-04-24 17:21:43.709496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b38fd00 len:0x10000 key:0x183c00 00:17:34.643 [2024-04-24 17:21:43.709519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b37fc80 len:0x10000 key:0x183c00 00:17:34.643 [2024-04-24 17:21:43.709547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b36fc00 len:0x10000 key:0x183c00 00:17:34.643 [2024-04-24 17:21:43.709570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b35fb80 len:0x10000 key:0x183c00 00:17:34.643 [2024-04-24 17:21:43.709593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b34fb00 len:0x10000 key:0x183c00 00:17:34.643 [2024-04-24 17:21:43.709615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b33fa80 len:0x10000 key:0x183c00 00:17:34.643 [2024-04-24 17:21:43.709638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b32fa00 len:0x10000 key:0x183c00 00:17:34.643 [2024-04-24 17:21:43.709660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 
lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b31f980 len:0x10000 key:0x183c00 00:17:34.643 [2024-04-24 17:21:43.709684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b30f900 len:0x10000 key:0x183c00 00:17:34.643 [2024-04-24 17:21:43.709707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ff880 len:0x10000 key:0x183c00 00:17:34.643 [2024-04-24 17:21:43.709730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ef800 len:0x10000 key:0x183c00 00:17:34.643 [2024-04-24 17:21:43.709753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2df780 len:0x10000 key:0x183c00 00:17:34.643 [2024-04-24 17:21:43.709776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2cf700 len:0x10000 key:0x183c00 00:17:34.643 [2024-04-24 17:21:43.709802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2bf680 len:0x10000 key:0x183c00 00:17:34.643 [2024-04-24 17:21:43.709831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2af600 len:0x10000 key:0x183c00 00:17:34.643 [2024-04-24 17:21:43.709854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.643 [2024-04-24 17:21:43.709867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b29f580 len:0x10000 key:0x183c00 00:17:34.643 [2024-04-24 17:21:43.709877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.709890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 
lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b28f500 len:0x10000 key:0x183c00 00:17:34.644 [2024-04-24 17:21:43.709900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.709913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b27f480 len:0x10000 key:0x183c00 00:17:34.644 [2024-04-24 17:21:43.709924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.709936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b26f400 len:0x10000 key:0x183c00 00:17:34.644 [2024-04-24 17:21:43.709946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.709959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b25f380 len:0x10000 key:0x183c00 00:17:34.644 [2024-04-24 17:21:43.709970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.709982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b24f300 len:0x10000 key:0x183c00 00:17:34.644 [2024-04-24 17:21:43.709993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.710006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b23f280 len:0x10000 key:0x183c00 00:17:34.644 [2024-04-24 17:21:43.710016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.710029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b22f200 len:0x10000 key:0x183c00 00:17:34.644 [2024-04-24 17:21:43.710039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.710052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b21f180 len:0x10000 key:0x183c00 00:17:34.644 [2024-04-24 17:21:43.710064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.710077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b20f100 len:0x10000 key:0x183c00 00:17:34.644 [2024-04-24 17:21:43.710088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.710101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 
lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5f0000 len:0x10000 key:0x183000 00:17:34.644 [2024-04-24 17:21:43.710111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.710123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5dff80 len:0x10000 key:0x183000 00:17:34.644 [2024-04-24 17:21:43.710133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.710146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5cff00 len:0x10000 key:0x183000 00:17:34.644 [2024-04-24 17:21:43.710157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.710170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5bfe80 len:0x10000 key:0x183000 00:17:34.644 [2024-04-24 17:21:43.710180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.710192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0efe00 len:0x10000 key:0x182d00 00:17:34.644 [2024-04-24 17:21:43.710203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32624 cdw0:fa903430 sqhd:108c p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.711981] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256900 was disconnected and freed. reset controller. 
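Everything between the waitforio trace and this point is bdev_nvme tearing down the first data qpair (0x200019256900): each outstanding WRITE is completed as ABORTED - SQ DELETION, one record per command with its cid, LBA, length and keyed SGL address/key, before the qpair is freed and the controller reset. Not part of the test itself, but when reading dumps like this it can help to collapse them into a per-key summary; a rough sketch, where build.log is a placeholder for a saved copy of this output:

# Group the aborted WRITEs by memory key and report how many there were and
# which LBA range they spanned. Purely a log-reading aid.
grep -o 'lba:[0-9]* len:[0-9]* SGL KEYED DATA BLOCK ADDRESS 0x[0-9a-f]* len:0x[0-9a-f]* key:0x[0-9a-f]*' build.log |
  awk '{
         split($1, l, ":"); lba = l[2] + 0; key = $NF
         n[key]++
         if (!(key in min) || lba < min[key]) min[key] = lba
         if (lba > max[key]) max[key] = lba
       }
       END { for (k in n) printf "%s: %d aborted WRITEs, lba %d..%d\n", k, n[k], min[k], max[k] }'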
00:17:34.644 [2024-04-24 17:21:43.712007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b49fb80 len:0x10000 key:0x183000 00:17:34.644 [2024-04-24 17:21:43.712018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.712035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b48fb00 len:0x10000 key:0x183000 00:17:34.644 [2024-04-24 17:21:43.712045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.712059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b47fa80 len:0x10000 key:0x183000 00:17:34.644 [2024-04-24 17:21:43.712069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.712082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b46fa00 len:0x10000 key:0x183000 00:17:34.644 [2024-04-24 17:21:43.712093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.712109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b45f980 len:0x10000 key:0x183000 00:17:34.644 [2024-04-24 17:21:43.712119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.712133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b44f900 len:0x10000 key:0x183000 00:17:34.644 [2024-04-24 17:21:43.712143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.712155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b43f880 len:0x10000 key:0x183000 00:17:34.644 [2024-04-24 17:21:43.712166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.712179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b42f800 len:0x10000 key:0x183000 00:17:34.644 [2024-04-24 17:21:43.712189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.712202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b41f780 len:0x10000 key:0x183000 00:17:34.644 [2024-04-24 17:21:43.712212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.712225] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b40f700 len:0x10000 key:0x183000 00:17:34.644 [2024-04-24 17:21:43.712235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.712250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7f0000 len:0x10000 key:0x183800 00:17:34.644 [2024-04-24 17:21:43.712261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.712274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7dff80 len:0x10000 key:0x183800 00:17:34.644 [2024-04-24 17:21:43.712285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.712297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7cff00 len:0x10000 key:0x183800 00:17:34.644 [2024-04-24 17:21:43.712308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.712321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7bfe80 len:0x10000 key:0x183800 00:17:34.644 [2024-04-24 17:21:43.712331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.712344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7afe00 len:0x10000 key:0x183800 00:17:34.644 [2024-04-24 17:21:43.712354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.712367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b79fd80 len:0x10000 key:0x183800 00:17:34.644 [2024-04-24 17:21:43.712380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.712393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b78fd00 len:0x10000 key:0x183800 00:17:34.644 [2024-04-24 17:21:43.712403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.712416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b77fc80 len:0x10000 key:0x183800 00:17:34.644 [2024-04-24 17:21:43.712426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.712439] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b76fc00 len:0x10000 key:0x183800 00:17:34.644 [2024-04-24 17:21:43.712449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.712462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b75fb80 len:0x10000 key:0x183800 00:17:34.644 [2024-04-24 17:21:43.712473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.712485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b74fb00 len:0x10000 key:0x183800 00:17:34.644 [2024-04-24 17:21:43.712496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.644 [2024-04-24 17:21:43.712509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b73fa80 len:0x10000 key:0x183800 00:17:34.644 [2024-04-24 17:21:43.712519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.712532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b72fa00 len:0x10000 key:0x183800 00:17:34.645 [2024-04-24 17:21:43.712542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.712554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b71f980 len:0x10000 key:0x183800 00:17:34.645 [2024-04-24 17:21:43.712564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.712577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b70f900 len:0x10000 key:0x183800 00:17:34.645 [2024-04-24 17:21:43.712588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.712601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ff880 len:0x10000 key:0x183800 00:17:34.645 [2024-04-24 17:21:43.712611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.712624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ef800 len:0x10000 key:0x183800 00:17:34.645 [2024-04-24 17:21:43.712636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.712649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 
lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6df780 len:0x10000 key:0x183800 00:17:34.645 [2024-04-24 17:21:43.712659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.712672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6cf700 len:0x10000 key:0x183800 00:17:34.645 [2024-04-24 17:21:43.712683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.712695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6bf680 len:0x10000 key:0x183800 00:17:34.645 [2024-04-24 17:21:43.712705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.712718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6af600 len:0x10000 key:0x183800 00:17:34.645 [2024-04-24 17:21:43.712729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.712741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b69f580 len:0x10000 key:0x183800 00:17:34.645 [2024-04-24 17:21:43.712752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.712764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b68f500 len:0x10000 key:0x183800 00:17:34.645 [2024-04-24 17:21:43.712774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.712788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b67f480 len:0x10000 key:0x183800 00:17:34.645 [2024-04-24 17:21:43.712798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.712811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b66f400 len:0x10000 key:0x183800 00:17:34.645 [2024-04-24 17:21:43.712821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.712841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b65f380 len:0x10000 key:0x183800 00:17:34.645 [2024-04-24 17:21:43.712851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.712864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x20001b64f300 len:0x10000 key:0x183800 00:17:34.645 [2024-04-24 17:21:43.712875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.712888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b63f280 len:0x10000 key:0x183800 00:17:34.645 [2024-04-24 17:21:43.712898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.712913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b62f200 len:0x10000 key:0x183800 00:17:34.645 [2024-04-24 17:21:43.712923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.712936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b61f180 len:0x10000 key:0x183800 00:17:34.645 [2024-04-24 17:21:43.712946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.712959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b60f100 len:0x10000 key:0x183800 00:17:34.645 [2024-04-24 17:21:43.712970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.712983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9f0000 len:0x10000 key:0x183d00 00:17:34.645 [2024-04-24 17:21:43.712993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.713006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9dff80 len:0x10000 key:0x183d00 00:17:34.645 [2024-04-24 17:21:43.713017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.713030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9cff00 len:0x10000 key:0x183d00 00:17:34.645 [2024-04-24 17:21:43.713040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.713053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9bfe80 len:0x10000 key:0x183d00 00:17:34.645 [2024-04-24 17:21:43.713063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.713076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9afe00 len:0x10000 
key:0x183d00 00:17:34.645 [2024-04-24 17:21:43.713086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.713099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b99fd80 len:0x10000 key:0x183d00 00:17:34.645 [2024-04-24 17:21:43.713110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.713122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b98fd00 len:0x10000 key:0x183d00 00:17:34.645 [2024-04-24 17:21:43.713132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.713145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b97fc80 len:0x10000 key:0x183d00 00:17:34.645 [2024-04-24 17:21:43.713155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.713170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b96fc00 len:0x10000 key:0x183d00 00:17:34.645 [2024-04-24 17:21:43.713180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.713193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b95fb80 len:0x10000 key:0x183d00 00:17:34.645 [2024-04-24 17:21:43.713203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.713216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b94fb00 len:0x10000 key:0x183d00 00:17:34.645 [2024-04-24 17:21:43.713226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.713239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b93fa80 len:0x10000 key:0x183d00 00:17:34.645 [2024-04-24 17:21:43.713249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.713262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b92fa00 len:0x10000 key:0x183d00 00:17:34.645 [2024-04-24 17:21:43.713272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.713285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b91f980 len:0x10000 key:0x183d00 00:17:34.645 [2024-04-24 
17:21:43.713295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.713308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b90f900 len:0x10000 key:0x183d00 00:17:34.645 [2024-04-24 17:21:43.713318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.713331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ff880 len:0x10000 key:0x183d00 00:17:34.645 [2024-04-24 17:21:43.713342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.713355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ef800 len:0x10000 key:0x183d00 00:17:34.645 [2024-04-24 17:21:43.713365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.645 [2024-04-24 17:21:43.713377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8df780 len:0x10000 key:0x183d00 00:17:34.646 [2024-04-24 17:21:43.713387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.646 [2024-04-24 17:21:43.713400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8cf700 len:0x10000 key:0x183d00 00:17:34.646 [2024-04-24 17:21:43.713411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.646 [2024-04-24 17:21:43.713423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8bf680 len:0x10000 key:0x183d00 00:17:34.646 [2024-04-24 17:21:43.713435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.646 [2024-04-24 17:21:43.713448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8af600 len:0x10000 key:0x183d00 00:17:34.646 [2024-04-24 17:21:43.713458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.646 [2024-04-24 17:21:43.713473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b89f580 len:0x10000 key:0x183d00 00:17:34.646 [2024-04-24 17:21:43.713484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.646 [2024-04-24 17:21:43.713496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4afc00 len:0x10000 key:0x183000 00:17:34.646 [2024-04-24 17:21:43.713507] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:d820 p:0 m:0 dnr:0 00:17:34.646 [2024-04-24 17:21:43.716246] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192566c0 was disconnected and freed. reset controller. 00:17:34.646 [2024-04-24 17:21:43.716323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.646 [2024-04-24 17:21:43.716336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.646 [2024-04-24 17:21:43.716348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.646 [2024-04-24 17:21:43.716359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.646 [2024-04-24 17:21:43.716369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.646 [2024-04-24 17:21:43.716380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.646 [2024-04-24 17:21:43.716391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.646 [2024-04-24 17:21:43.716400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.646 [2024-04-24 17:21:43.718318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:34.646 [2024-04-24 17:21:43.718335] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:17:34.646 [2024-04-24 17:21:43.718345] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
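The records above are the host side of the shutdown for the first controller: the admin queue's outstanding ASYNC EVENT REQUESTs are aborted by SQ deletion, the completion path reports CQ transport error -6 (No such device or address), and bdev_nvme marks nqn.2016-06.io.spdk:cnode5 as failed, skipping failover because one is already in progress. When a shutdown test stalls at this point, one quick check is to ask the bdevperf app which controllers it still holds; bdev_nvme_get_controllers is the stock SPDK RPC, and the socket path is the one used throughout this trace:

# List the NVMe controllers bdevperf still knows about.
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq .
# A controller that has been disconnected, as cnode5 (Nvme5) is above, either drops
# out of this list or shows up in a failed/resetting state; treat the exact output
# shape as SPDK-version-dependent.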
00:17:34.646 [2024-04-24 17:21:43.718363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.646 [2024-04-24 17:21:43.718374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.646 [2024-04-24 17:21:43.718385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.646 [2024-04-24 17:21:43.718395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.646 [2024-04-24 17:21:43.718406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.646 [2024-04-24 17:21:43.718416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.646 [2024-04-24 17:21:43.718430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.646 [2024-04-24 17:21:43.718441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.646 [2024-04-24 17:21:43.720563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:34.646 [2024-04-24 17:21:43.720578] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:17:34.646 [2024-04-24 17:21:43.720587] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:17:34.646 [2024-04-24 17:21:43.720604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.646 [2024-04-24 17:21:43.720615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.646 [2024-04-24 17:21:43.720626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.646 [2024-04-24 17:21:43.720636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.646 [2024-04-24 17:21:43.720648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.646 [2024-04-24 17:21:43.720658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.646 [2024-04-24 17:21:43.720669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.646 [2024-04-24 17:21:43.720679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.646 [2024-04-24 17:21:43.722799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:34.646 [2024-04-24 17:21:43.722813] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:17:34.646 [2024-04-24 17:21:43.722822] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:17:34.646 [2024-04-24 17:21:43.722865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.646 [2024-04-24 17:21:43.722876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.646 [2024-04-24 17:21:43.722888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.646 [2024-04-24 17:21:43.722898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.646 [2024-04-24 17:21:43.722909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.646 [2024-04-24 17:21:43.722919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.646 [2024-04-24 17:21:43.722930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.646 [2024-04-24 17:21:43.722940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.646 [2024-04-24 17:21:43.724515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:34.646 [2024-04-24 17:21:43.724529] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:34.646 [2024-04-24 17:21:43.724542] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:17:34.646 [2024-04-24 17:21:43.724558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.646 [2024-04-24 17:21:43.724568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.646 [2024-04-24 17:21:43.724579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.646 [2024-04-24 17:21:43.724590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.646 [2024-04-24 17:21:43.724600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.646 [2024-04-24 17:21:43.724610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.646 [2024-04-24 17:21:43.724621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.646 [2024-04-24 17:21:43.724631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.646 [2024-04-24 17:21:43.726398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:34.646 [2024-04-24 17:21:43.726412] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:17:34.646 [2024-04-24 17:21:43.726421] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:17:34.647 [2024-04-24 17:21:43.726438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.647 [2024-04-24 17:21:43.726448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.647 [2024-04-24 17:21:43.726460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.647 [2024-04-24 17:21:43.726470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.647 [2024-04-24 17:21:43.726480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.647 [2024-04-24 17:21:43.726491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.647 [2024-04-24 17:21:43.726502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.647 [2024-04-24 17:21:43.726512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.647 [2024-04-24 17:21:43.728516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:34.647 [2024-04-24 17:21:43.728547] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:17:34.647 [2024-04-24 17:21:43.728565] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:17:34.647 [2024-04-24 17:21:43.728598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.647 [2024-04-24 17:21:43.728619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.647 [2024-04-24 17:21:43.728642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.647 [2024-04-24 17:21:43.728670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.647 [2024-04-24 17:21:43.728693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.647 [2024-04-24 17:21:43.728725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.647 [2024-04-24 17:21:43.728737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.647 [2024-04-24 17:21:43.728747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.647 [2024-04-24 17:21:43.730634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:34.647 [2024-04-24 17:21:43.730663] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:17:34.647 [2024-04-24 17:21:43.730682] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:17:34.647 [2024-04-24 17:21:43.730715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.647 [2024-04-24 17:21:43.730736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.647 [2024-04-24 17:21:43.730758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.647 [2024-04-24 17:21:43.730779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.647 [2024-04-24 17:21:43.730801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.647 [2024-04-24 17:21:43.730822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.647 [2024-04-24 17:21:43.730885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.647 [2024-04-24 17:21:43.730906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.647 [2024-04-24 17:21:43.732625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:34.647 [2024-04-24 17:21:43.732655] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:17:34.647 [2024-04-24 17:21:43.732674] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:17:34.647 [2024-04-24 17:21:43.732706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.647 [2024-04-24 17:21:43.732727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.647 [2024-04-24 17:21:43.732750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.647 [2024-04-24 17:21:43.732771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.647 [2024-04-24 17:21:43.732794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.647 [2024-04-24 17:21:43.732823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.647 [2024-04-24 17:21:43.732840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.647 [2024-04-24 17:21:43.732854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.647 [2024-04-24 17:21:43.734452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:34.647 [2024-04-24 17:21:43.734481] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:17:34.647 [2024-04-24 17:21:43.734500] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:17:34.647 [2024-04-24 17:21:43.734532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.647 [2024-04-24 17:21:43.734553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.647 [2024-04-24 17:21:43.734577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.647 [2024-04-24 17:21:43.734597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.647 [2024-04-24 17:21:43.734619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.647 [2024-04-24 17:21:43.734640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.647 [2024-04-24 17:21:43.734662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.647 [2024-04-24 17:21:43.734683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21787 cdw0:0 sqhd:d800 p:1 m:1 dnr:0 00:17:34.647 [2024-04-24 17:21:43.755195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:34.647 [2024-04-24 17:21:43.755236] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:17:34.647 [2024-04-24 17:21:43.755256] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:34.647 [2024-04-24 17:21:43.758182] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:34.647 [2024-04-24 17:21:43.758200] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:17:34.647 [2024-04-24 17:21:43.758208] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:17:34.647 [2024-04-24 17:21:43.758215] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:17:34.647 [2024-04-24 17:21:43.758222] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:17:34.647 [2024-04-24 17:21:43.758230] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:17:34.647 [2024-04-24 17:21:43.758281] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:34.647 [2024-04-24 17:21:43.758293] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:34.647 [2024-04-24 17:21:43.758301] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:34.647 [2024-04-24 17:21:43.758310] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:17:34.647 [2024-04-24 17:21:43.758440] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:17:34.647 [2024-04-24 17:21:43.758451] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:17:34.647 [2024-04-24 17:21:43.758459] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:17:34.647 [2024-04-24 17:21:43.758468] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:17:34.647 task offset: 40960 on job bdev=Nvme6n1 fails 00:17:34.647 00:17:34.647 Latency(us) 00:17:34.647 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:34.647 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:34.647 Job: Nvme1n1 ended in about 1.88 seconds with error 00:17:34.647 Verification LBA range: start 0x0 length 0x400 00:17:34.647 Nvme1n1 : 1.88 135.84 8.49 33.96 0.00 372891.65 33953.89 1062557.01 00:17:34.647 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:34.647 Job: Nvme2n1 ended in about 1.89 seconds with error 00:17:34.647 Verification LBA range: start 0x0 length 0x400 00:17:34.647 Nvme2n1 : 1.89 135.74 8.48 33.94 0.00 369616.02 36450.50 1062557.01 00:17:34.647 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:34.647 Job: Nvme3n1 ended in about 1.89 seconds with error 00:17:34.647 Verification LBA range: start 0x0 length 0x400 00:17:34.647 Nvme3n1 : 1.89 135.65 8.48 33.91 0.00 366793.29 42192.70 1062557.01 00:17:34.647 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:34.647 Job: Nvme4n1 ended in about 1.89 seconds with error 00:17:34.647 Verification LBA range: start 0x0 length 0x400 00:17:34.647 Nvme4n1 : 1.89 151.46 9.47 33.89 0.00 332605.19 5336.50 1062557.01 00:17:34.647 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:34.647 Job: Nvme5n1 ended in about 1.89 seconds with error 00:17:34.647 Verification LBA range: start 0x0 length 0x400 00:17:34.647 Nvme5n1 : 1.89 139.72 8.73 33.87 0.00 352059.43 9299.87 1062557.01 00:17:34.647 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:34.647 Job: Nvme6n1 ended in about 1.89 seconds with error 00:17:34.647 Verification LBA range: start 0x0 length 0x400 00:17:34.648 Nvme6n1 : 1.89 143.33 8.96 33.85 0.00 341801.96 11234.74 1054567.86 00:17:34.648 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:34.648 Job: Nvme7n1 ended in about 1.89 seconds with error 00:17:34.648 Verification LBA range: start 0x0 length 0x400 00:17:34.648 Nvme7n1 : 1.89 152.23 9.51 33.83 0.00 322407.33 16976.94 1054567.86 00:17:34.648 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:34.648 Job: Nvme8n1 ended in about 1.89 seconds with error 00:17:34.648 Verification LBA range: start 0x0 length 0x400 00:17:34.648 Nvme8n1 : 1.89 144.74 9.05 33.81 0.00 332738.59 24092.28 1046578.71 00:17:34.648 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:34.648 Job: Nvme9n1 ended in about 1.88 seconds with error 00:17:34.648 Verification LBA range: start 0x0 length 0x400 00:17:34.648 Nvme9n1 : 1.88 136.49 8.53 34.12 0.00 347225.33 57671.68 1110491.92 00:17:34.648 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:34.648 Job: Nvme10n1 ended in about 1.88 seconds with error 00:17:34.648 
Verification LBA range: start 0x0 length 0x400 00:17:34.648 Nvme10n1 : 1.88 102.32 6.40 34.11 0.00 430209.71 62664.90 1094513.62 00:17:34.648 =================================================================================================================== 00:17:34.648 Total : 1377.54 86.10 339.29 0.00 354605.15 5336.50 1110491.92 00:17:34.648 [2024-04-24 17:21:43.797044] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:34.648 [2024-04-24 17:21:43.802111] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:17:34.648 [2024-04-24 17:21:43.802129] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:17:34.648 [2024-04-24 17:21:43.802134] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192bd440 00:17:34.648 [2024-04-24 17:21:43.802221] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:17:34.648 [2024-04-24 17:21:43.802231] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:17:34.648 [2024-04-24 17:21:43.802236] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c60c0 00:17:34.648 [2024-04-24 17:21:43.802315] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:17:34.648 [2024-04-24 17:21:43.802321] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:17:34.648 [2024-04-24 17:21:43.802326] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d9c00 00:17:34.648 [2024-04-24 17:21:43.802409] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:17:34.648 [2024-04-24 17:21:43.802416] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:17:34.648 [2024-04-24 17:21:43.802421] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ba540 00:17:34.648 [2024-04-24 17:21:43.802511] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:17:34.648 [2024-04-24 17:21:43.802518] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:17:34.648 [2024-04-24 17:21:43.802523] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e5380 00:17:34.648 [2024-04-24 17:21:43.802621] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:17:34.648 [2024-04-24 17:21:43.802628] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:17:34.648 [2024-04-24 17:21:43.802633] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:17:34.648 [2024-04-24 17:21:43.803579] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:17:34.648 [2024-04-24 17:21:43.803590] 
nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:17:34.648 [2024-04-24 17:21:43.803595] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192b54c0 00:17:34.648 [2024-04-24 17:21:43.803680] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:17:34.648 [2024-04-24 17:21:43.803688] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:17:34.648 [2024-04-24 17:21:43.803693] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929c140 00:17:34.648 [2024-04-24 17:21:43.803772] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:17:34.648 [2024-04-24 17:21:43.803780] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:17:34.648 [2024-04-24 17:21:43.803785] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019298cc0 00:17:34.648 [2024-04-24 17:21:43.803874] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:17:34.648 [2024-04-24 17:21:43.803881] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:17:34.648 [2024-04-24 17:21:43.803886] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928f280 00:17:34.906 17:21:44 -- target/shutdown.sh@142 -- # kill -9 3028125 00:17:34.906 17:21:44 -- target/shutdown.sh@144 -- # stoptarget 00:17:35.165 17:21:44 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:17:35.165 17:21:44 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:35.165 17:21:44 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:35.165 17:21:44 -- target/shutdown.sh@45 -- # nvmftestfini 00:17:35.165 17:21:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:35.165 17:21:44 -- nvmf/common.sh@117 -- # sync 00:17:35.165 17:21:44 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:35.165 17:21:44 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:35.165 17:21:44 -- nvmf/common.sh@120 -- # set +e 00:17:35.165 17:21:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:35.165 17:21:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:35.165 rmmod nvme_rdma 00:17:35.165 rmmod nvme_fabrics 00:17:35.165 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 121: 3028125 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:17:35.165 17:21:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:35.165 17:21:44 -- nvmf/common.sh@124 -- # set -e 00:17:35.165 17:21:44 -- nvmf/common.sh@125 -- # return 0 00:17:35.165 17:21:44 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:17:35.165 17:21:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:35.165 17:21:44 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:17:35.165 00:17:35.165 real 0m5.131s 00:17:35.165 user 0m17.650s 00:17:35.165 sys 0m1.076s 00:17:35.165 17:21:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:35.165 17:21:44 -- 
common/autotest_common.sh@10 -- # set +x 00:17:35.165 ************************************ 00:17:35.165 END TEST nvmf_shutdown_tc3 00:17:35.165 ************************************ 00:17:35.165 17:21:44 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:17:35.165 00:17:35.165 real 0m23.432s 00:17:35.165 user 1m10.711s 00:17:35.165 sys 0m7.447s 00:17:35.165 17:21:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:35.165 17:21:44 -- common/autotest_common.sh@10 -- # set +x 00:17:35.165 ************************************ 00:17:35.165 END TEST nvmf_shutdown 00:17:35.165 ************************************ 00:17:35.165 17:21:44 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:17:35.165 17:21:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:35.165 17:21:44 -- common/autotest_common.sh@10 -- # set +x 00:17:35.165 17:21:44 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:17:35.165 17:21:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:35.165 17:21:44 -- common/autotest_common.sh@10 -- # set +x 00:17:35.165 17:21:44 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:17:35.165 17:21:44 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:17:35.165 17:21:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:35.165 17:21:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:35.165 17:21:44 -- common/autotest_common.sh@10 -- # set +x 00:17:35.424 ************************************ 00:17:35.424 START TEST nvmf_multicontroller 00:17:35.424 ************************************ 00:17:35.424 17:21:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:17:35.424 * Looking for test storage... 
00:17:35.424 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:35.424 17:21:44 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:35.424 17:21:44 -- nvmf/common.sh@7 -- # uname -s 00:17:35.424 17:21:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.424 17:21:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.424 17:21:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.424 17:21:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.424 17:21:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.424 17:21:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.424 17:21:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.424 17:21:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.424 17:21:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.424 17:21:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.424 17:21:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:35.424 17:21:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:17:35.424 17:21:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.424 17:21:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.424 17:21:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:35.424 17:21:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:35.424 17:21:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:35.424 17:21:44 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.424 17:21:44 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.424 17:21:44 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.424 17:21:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.424 17:21:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.424 17:21:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.424 17:21:44 -- paths/export.sh@5 -- # export PATH 00:17:35.424 17:21:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.424 17:21:44 -- nvmf/common.sh@47 -- # : 0 00:17:35.424 17:21:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:35.424 17:21:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:35.424 17:21:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:35.424 17:21:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.424 17:21:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.424 17:21:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:35.424 17:21:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:35.424 17:21:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:35.424 17:21:44 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:35.424 17:21:44 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:35.424 17:21:44 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:17:35.424 17:21:44 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:17:35.424 17:21:44 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:35.424 17:21:44 -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:17:35.424 17:21:44 -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:17:35.424 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
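For reference, the skip decision traced above (host/multicontroller.sh@18-20 markers) amounts to a small transport guard; a minimal standalone sketch, assuming the transport is carried in a TEST_TRANSPORT-style variable rather than the script's actual argument handling:

    if [ "$TEST_TRANSPORT" = "rdma" ]; then
        # RDMA is skipped because the rdma stack fails to configure the same IP
        # for host and target on this setup (see the echoed message above).
        echo "Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target."
        exit 0
    fi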
00:17:35.424 17:21:44 -- host/multicontroller.sh@20 -- # exit 0 00:17:35.424 00:17:35.424 real 0m0.125s 00:17:35.424 user 0m0.067s 00:17:35.424 sys 0m0.066s 00:17:35.424 17:21:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:35.424 17:21:44 -- common/autotest_common.sh@10 -- # set +x 00:17:35.424 ************************************ 00:17:35.424 END TEST nvmf_multicontroller 00:17:35.424 ************************************ 00:17:35.424 17:21:44 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:17:35.424 17:21:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:35.424 17:21:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:35.424 17:21:44 -- common/autotest_common.sh@10 -- # set +x 00:17:35.684 ************************************ 00:17:35.684 START TEST nvmf_aer 00:17:35.684 ************************************ 00:17:35.684 17:21:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:17:35.684 * Looking for test storage... 00:17:35.684 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:35.684 17:21:44 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:35.684 17:21:44 -- nvmf/common.sh@7 -- # uname -s 00:17:35.684 17:21:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.684 17:21:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.684 17:21:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.684 17:21:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.684 17:21:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.684 17:21:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.684 17:21:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.684 17:21:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.684 17:21:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.684 17:21:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.684 17:21:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:35.684 17:21:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:17:35.684 17:21:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.684 17:21:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.684 17:21:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:35.684 17:21:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:35.684 17:21:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:35.684 17:21:44 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.684 17:21:44 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.684 17:21:44 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.684 17:21:44 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.684 17:21:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.684 17:21:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.684 17:21:44 -- paths/export.sh@5 -- # export PATH 00:17:35.684 17:21:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.684 17:21:44 -- nvmf/common.sh@47 -- # : 0 00:17:35.684 17:21:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:35.684 17:21:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:35.684 17:21:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:35.684 17:21:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.684 17:21:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.684 17:21:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:35.685 17:21:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:35.685 17:21:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:35.685 17:21:44 -- host/aer.sh@11 -- # nvmftestinit 00:17:35.685 17:21:44 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:17:35.685 17:21:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:35.685 17:21:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:35.685 17:21:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:35.685 17:21:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:35.685 17:21:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.685 17:21:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.685 17:21:44 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.685 17:21:44 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:35.685 17:21:44 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:35.685 17:21:44 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:35.685 17:21:44 -- common/autotest_common.sh@10 -- # set +x 00:17:42.248 17:21:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:42.248 17:21:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:42.248 17:21:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:42.248 17:21:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:42.248 17:21:50 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:42.248 17:21:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:42.248 17:21:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:42.248 17:21:50 -- nvmf/common.sh@295 -- # net_devs=() 00:17:42.248 17:21:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:42.248 17:21:50 -- nvmf/common.sh@296 -- # e810=() 00:17:42.248 17:21:50 -- nvmf/common.sh@296 -- # local -ga e810 00:17:42.248 17:21:50 -- nvmf/common.sh@297 -- # x722=() 00:17:42.248 17:21:50 -- nvmf/common.sh@297 -- # local -ga x722 00:17:42.248 17:21:50 -- nvmf/common.sh@298 -- # mlx=() 00:17:42.248 17:21:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:42.248 17:21:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:42.248 17:21:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:42.248 17:21:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:42.248 17:21:50 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:42.248 17:21:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:42.248 17:21:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:42.248 17:21:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:42.248 17:21:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:42.248 17:21:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:42.248 17:21:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:42.248 17:21:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:42.248 17:21:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:42.248 17:21:50 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:42.248 17:21:50 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:42.248 17:21:50 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:42.248 17:21:50 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:42.248 17:21:50 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:42.248 17:21:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:42.248 17:21:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:42.248 17:21:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:17:42.248 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:17:42.248 17:21:50 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:42.248 17:21:50 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:42.248 17:21:50 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:42.248 17:21:50 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:42.248 17:21:50 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:42.248 17:21:50 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:42.248 17:21:50 -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:17:42.248 17:21:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:17:42.248 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:17:42.249 17:21:50 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:42.249 17:21:50 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:42.249 17:21:50 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:42.249 17:21:50 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:42.249 17:21:50 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:42.249 17:21:50 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:42.249 17:21:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:42.249 17:21:50 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:42.249 17:21:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:42.249 17:21:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.249 17:21:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:42.249 17:21:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.249 17:21:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:17:42.249 Found net devices under 0000:da:00.0: mlx_0_0 00:17:42.249 17:21:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.249 17:21:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:42.249 17:21:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.249 17:21:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:42.249 17:21:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.249 17:21:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:17:42.249 Found net devices under 0000:da:00.1: mlx_0_1 00:17:42.249 17:21:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.249 17:21:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:42.249 17:21:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:42.249 17:21:50 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:42.249 17:21:50 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:17:42.249 17:21:50 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:17:42.249 17:21:50 -- nvmf/common.sh@409 -- # rdma_device_init 00:17:42.249 17:21:50 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:17:42.249 17:21:50 -- nvmf/common.sh@58 -- # uname 00:17:42.249 17:21:50 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:42.249 17:21:50 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:42.249 17:21:50 -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:42.249 17:21:50 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:42.249 17:21:50 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:42.249 17:21:50 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:42.249 17:21:50 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:42.249 17:21:50 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:42.249 17:21:50 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:17:42.249 17:21:50 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:42.249 17:21:50 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:42.249 17:21:50 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:42.249 17:21:50 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:42.249 17:21:50 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:42.249 17:21:50 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:42.249 17:21:50 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 
00:17:42.249 17:21:50 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:42.249 17:21:50 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:42.249 17:21:50 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:42.249 17:21:50 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:42.249 17:21:50 -- nvmf/common.sh@105 -- # continue 2 00:17:42.249 17:21:50 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:42.249 17:21:50 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:42.249 17:21:50 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:42.249 17:21:50 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:42.249 17:21:50 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:42.249 17:21:50 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:42.249 17:21:50 -- nvmf/common.sh@105 -- # continue 2 00:17:42.249 17:21:50 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:42.249 17:21:50 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:42.249 17:21:50 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:42.249 17:21:50 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:42.249 17:21:50 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:42.249 17:21:50 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:42.249 17:21:50 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:42.249 17:21:50 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:42.249 17:21:50 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:42.249 434: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:42.249 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:17:42.249 altname enp218s0f0np0 00:17:42.249 altname ens818f0np0 00:17:42.249 inet 192.168.100.8/24 scope global mlx_0_0 00:17:42.249 valid_lft forever preferred_lft forever 00:17:42.249 17:21:50 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:42.249 17:21:50 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:42.249 17:21:50 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:42.249 17:21:50 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:42.249 17:21:50 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:42.249 17:21:50 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:42.249 17:21:50 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:42.249 17:21:50 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:42.249 17:21:50 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:42.249 435: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:42.249 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:17:42.249 altname enp218s0f1np1 00:17:42.249 altname ens818f1np1 00:17:42.249 inet 192.168.100.9/24 scope global mlx_0_1 00:17:42.249 valid_lft forever preferred_lft forever 00:17:42.249 17:21:50 -- nvmf/common.sh@411 -- # return 0 00:17:42.249 17:21:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:42.249 17:21:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:42.249 17:21:50 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:17:42.249 17:21:50 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:17:42.249 17:21:50 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:42.249 17:21:50 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:42.249 17:21:50 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:42.249 17:21:50 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:42.249 17:21:50 -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:42.249 17:21:50 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:42.249 17:21:50 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:42.249 17:21:50 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:42.249 17:21:50 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:42.249 17:21:50 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:42.249 17:21:50 -- nvmf/common.sh@105 -- # continue 2 00:17:42.249 17:21:50 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:42.249 17:21:50 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:42.249 17:21:50 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:42.249 17:21:50 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:42.249 17:21:50 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:42.249 17:21:50 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:42.249 17:21:50 -- nvmf/common.sh@105 -- # continue 2 00:17:42.249 17:21:50 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:42.249 17:21:50 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:42.249 17:21:50 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:42.249 17:21:50 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:42.249 17:21:50 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:42.249 17:21:50 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:42.249 17:21:50 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:42.249 17:21:50 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:42.249 17:21:50 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:42.249 17:21:50 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:42.249 17:21:50 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:42.249 17:21:50 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:42.249 17:21:50 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:17:42.249 192.168.100.9' 00:17:42.249 17:21:50 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:42.249 192.168.100.9' 00:17:42.249 17:21:50 -- nvmf/common.sh@446 -- # head -n 1 00:17:42.249 17:21:50 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:42.249 17:21:50 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:17:42.249 192.168.100.9' 00:17:42.249 17:21:50 -- nvmf/common.sh@447 -- # tail -n +2 00:17:42.249 17:21:50 -- nvmf/common.sh@447 -- # head -n 1 00:17:42.249 17:21:50 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:42.249 17:21:50 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:17:42.249 17:21:50 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:42.249 17:21:50 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:17:42.249 17:21:50 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:17:42.249 17:21:50 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:17:42.249 17:21:50 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:17:42.249 17:21:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:42.249 17:21:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:42.249 17:21:50 -- common/autotest_common.sh@10 -- # set +x 00:17:42.249 17:21:50 -- nvmf/common.sh@470 -- # nvmfpid=3030503 00:17:42.249 17:21:50 -- nvmf/common.sh@471 -- # waitforlisten 3030503 00:17:42.249 17:21:50 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:42.249 17:21:50 -- common/autotest_common.sh@817 -- # 
'[' -z 3030503 ']' 00:17:42.249 17:21:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.249 17:21:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:42.249 17:21:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.249 17:21:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:42.249 17:21:50 -- common/autotest_common.sh@10 -- # set +x 00:17:42.249 [2024-04-24 17:21:50.465073] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:17:42.250 [2024-04-24 17:21:50.465124] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.250 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.250 [2024-04-24 17:21:50.522337] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:42.250 [2024-04-24 17:21:50.608004] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:42.250 [2024-04-24 17:21:50.608042] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:42.250 [2024-04-24 17:21:50.608049] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:42.250 [2024-04-24 17:21:50.608055] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:42.250 [2024-04-24 17:21:50.608060] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:42.250 [2024-04-24 17:21:50.608096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.250 [2024-04-24 17:21:50.608195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:42.250 [2024-04-24 17:21:50.608281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:42.250 [2024-04-24 17:21:50.608283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.250 17:21:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:42.250 17:21:51 -- common/autotest_common.sh@850 -- # return 0 00:17:42.250 17:21:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:42.250 17:21:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:42.250 17:21:51 -- common/autotest_common.sh@10 -- # set +x 00:17:42.250 17:21:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.250 17:21:51 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:42.250 17:21:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:42.250 17:21:51 -- common/autotest_common.sh@10 -- # set +x 00:17:42.250 [2024-04-24 17:21:51.353411] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16d7f60/0x16dc450) succeed. 00:17:42.250 [2024-04-24 17:21:51.363582] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x16d9550/0x171dae0) succeed. 
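The RPC calls traced above and below come straight from host/aer.sh; rpc_cmd is the test helper that hands each call to scripts/rpc.py over the /var/tmp/spdk.sock socket the target was just started on. Condensed into a standalone sketch (the rpc.py path is assumed from this workspace layout, and sizes follow rpc.py's MiB convention), the target-side setup for the AER test is roughly:

    # Sketch of the aer.sh target setup: RDMA transport, one malloc namespace, one listener.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 --name Malloc0        # 64 MiB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_get_subsystems                             # produces the JSON dump shown below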
00:17:42.250 17:21:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:42.250 17:21:51 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:17:42.250 17:21:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:42.250 17:21:51 -- common/autotest_common.sh@10 -- # set +x 00:17:42.507 Malloc0 00:17:42.507 17:21:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:42.507 17:21:51 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:17:42.507 17:21:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:42.507 17:21:51 -- common/autotest_common.sh@10 -- # set +x 00:17:42.507 17:21:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:42.507 17:21:51 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:42.507 17:21:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:42.507 17:21:51 -- common/autotest_common.sh@10 -- # set +x 00:17:42.507 17:21:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:42.507 17:21:51 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:42.507 17:21:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:42.507 17:21:51 -- common/autotest_common.sh@10 -- # set +x 00:17:42.507 [2024-04-24 17:21:51.529827] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:42.507 17:21:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:42.507 17:21:51 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:17:42.507 17:21:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:42.507 17:21:51 -- common/autotest_common.sh@10 -- # set +x 00:17:42.507 [2024-04-24 17:21:51.537436] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:17:42.507 [ 00:17:42.507 { 00:17:42.507 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:42.507 "subtype": "Discovery", 00:17:42.507 "listen_addresses": [], 00:17:42.507 "allow_any_host": true, 00:17:42.507 "hosts": [] 00:17:42.507 }, 00:17:42.507 { 00:17:42.507 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.507 "subtype": "NVMe", 00:17:42.507 "listen_addresses": [ 00:17:42.507 { 00:17:42.507 "transport": "RDMA", 00:17:42.507 "trtype": "RDMA", 00:17:42.507 "adrfam": "IPv4", 00:17:42.507 "traddr": "192.168.100.8", 00:17:42.507 "trsvcid": "4420" 00:17:42.507 } 00:17:42.507 ], 00:17:42.507 "allow_any_host": true, 00:17:42.507 "hosts": [], 00:17:42.507 "serial_number": "SPDK00000000000001", 00:17:42.507 "model_number": "SPDK bdev Controller", 00:17:42.507 "max_namespaces": 2, 00:17:42.507 "min_cntlid": 1, 00:17:42.507 "max_cntlid": 65519, 00:17:42.507 "namespaces": [ 00:17:42.507 { 00:17:42.507 "nsid": 1, 00:17:42.507 "bdev_name": "Malloc0", 00:17:42.507 "name": "Malloc0", 00:17:42.507 "nguid": "B46077AA356841E5954BE1B24A45E4FC", 00:17:42.507 "uuid": "b46077aa-3568-41e5-954b-e1b24a45e4fc" 00:17:42.507 } 00:17:42.507 ] 00:17:42.507 } 00:17:42.507 ] 00:17:42.507 17:21:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:42.507 17:21:51 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:42.507 17:21:51 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:17:42.507 17:21:51 -- host/aer.sh@33 -- # aerpid=3030544 00:17:42.507 17:21:51 -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:17:42.507 17:21:51 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:17:42.507 17:21:51 -- common/autotest_common.sh@1251 -- # local i=0 00:17:42.507 17:21:51 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:42.507 17:21:51 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:17:42.507 17:21:51 -- common/autotest_common.sh@1254 -- # i=1 00:17:42.507 17:21:51 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:17:42.507 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.507 17:21:51 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:42.507 17:21:51 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:17:42.507 17:21:51 -- common/autotest_common.sh@1254 -- # i=2 00:17:42.507 17:21:51 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:17:42.765 17:21:51 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:42.765 17:21:51 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:42.765 17:21:51 -- common/autotest_common.sh@1262 -- # return 0 00:17:42.765 17:21:51 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:17:42.765 17:21:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:42.765 17:21:51 -- common/autotest_common.sh@10 -- # set +x 00:17:42.765 Malloc1 00:17:42.765 17:21:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:42.765 17:21:51 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:17:42.765 17:21:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:42.765 17:21:51 -- common/autotest_common.sh@10 -- # set +x 00:17:42.765 17:21:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:42.765 17:21:51 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:17:42.765 17:21:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:42.765 17:21:51 -- common/autotest_common.sh@10 -- # set +x 00:17:42.765 [ 00:17:42.765 { 00:17:42.765 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:42.765 "subtype": "Discovery", 00:17:42.765 "listen_addresses": [], 00:17:42.766 "allow_any_host": true, 00:17:42.766 "hosts": [] 00:17:42.766 }, 00:17:42.766 { 00:17:42.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.766 "subtype": "NVMe", 00:17:42.766 "listen_addresses": [ 00:17:42.766 { 00:17:42.766 "transport": "RDMA", 00:17:42.766 "trtype": "RDMA", 00:17:42.766 "adrfam": "IPv4", 00:17:42.766 "traddr": "192.168.100.8", 00:17:42.766 "trsvcid": "4420" 00:17:42.766 } 00:17:42.766 ], 00:17:42.766 "allow_any_host": true, 00:17:42.766 "hosts": [], 00:17:42.766 "serial_number": "SPDK00000000000001", 00:17:42.766 "model_number": "SPDK bdev Controller", 00:17:42.766 "max_namespaces": 2, 00:17:42.766 "min_cntlid": 1, 00:17:42.766 "max_cntlid": 65519, 00:17:42.766 "namespaces": [ 00:17:42.766 { 00:17:42.766 "nsid": 1, 00:17:42.766 "bdev_name": "Malloc0", 00:17:42.766 "name": "Malloc0", 00:17:42.766 "nguid": "B46077AA356841E5954BE1B24A45E4FC", 00:17:42.766 "uuid": "b46077aa-3568-41e5-954b-e1b24a45e4fc" 00:17:42.766 }, 00:17:42.766 { 00:17:42.766 "nsid": 2, 00:17:42.766 "bdev_name": "Malloc1", 00:17:42.766 "name": "Malloc1", 00:17:42.766 "nguid": "E445D58D03694E608E2F5E67BF3D5204", 00:17:42.766 "uuid": "e445d58d-0369-4e60-8e2f-5e67bf3d5204" 00:17:42.766 } 00:17:42.766 ] 00:17:42.766 } 
00:17:42.766 ] 00:17:42.766 17:21:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:42.766 17:21:51 -- host/aer.sh@43 -- # wait 3030544 00:17:42.766 Asynchronous Event Request test 00:17:42.766 Attaching to 192.168.100.8 00:17:42.766 Attached to 192.168.100.8 00:17:42.766 Registering asynchronous event callbacks... 00:17:42.766 Starting namespace attribute notice tests for all controllers... 00:17:42.766 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:42.766 aer_cb - Changed Namespace 00:17:42.766 Cleaning up... 00:17:42.766 17:21:51 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:42.766 17:21:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:42.766 17:21:51 -- common/autotest_common.sh@10 -- # set +x 00:17:42.766 17:21:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:42.766 17:21:51 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:42.766 17:21:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:42.766 17:21:51 -- common/autotest_common.sh@10 -- # set +x 00:17:42.766 17:21:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:42.766 17:21:51 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:42.766 17:21:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:42.766 17:21:51 -- common/autotest_common.sh@10 -- # set +x 00:17:42.766 17:21:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:42.766 17:21:51 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:17:42.766 17:21:51 -- host/aer.sh@51 -- # nvmftestfini 00:17:42.766 17:21:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:42.766 17:21:51 -- nvmf/common.sh@117 -- # sync 00:17:42.766 17:21:51 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:42.766 17:21:51 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:42.766 17:21:51 -- nvmf/common.sh@120 -- # set +e 00:17:42.766 17:21:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:42.766 17:21:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:42.766 rmmod nvme_rdma 00:17:42.766 rmmod nvme_fabrics 00:17:42.766 17:21:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:42.766 17:21:51 -- nvmf/common.sh@124 -- # set -e 00:17:42.766 17:21:51 -- nvmf/common.sh@125 -- # return 0 00:17:42.766 17:21:51 -- nvmf/common.sh@478 -- # '[' -n 3030503 ']' 00:17:42.766 17:21:51 -- nvmf/common.sh@479 -- # killprocess 3030503 00:17:42.766 17:21:51 -- common/autotest_common.sh@936 -- # '[' -z 3030503 ']' 00:17:42.766 17:21:51 -- common/autotest_common.sh@940 -- # kill -0 3030503 00:17:42.766 17:21:51 -- common/autotest_common.sh@941 -- # uname 00:17:42.766 17:21:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:42.766 17:21:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3030503 00:17:42.766 17:21:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:42.766 17:21:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:42.766 17:21:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3030503' 00:17:42.766 killing process with pid 3030503 00:17:42.766 17:21:51 -- common/autotest_common.sh@955 -- # kill 3030503 00:17:42.766 [2024-04-24 17:21:52.000444] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:17:42.766 17:21:51 -- common/autotest_common.sh@960 -- # wait 3030503 00:17:43.332 17:21:52 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:43.332 17:21:52 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:17:43.332 00:17:43.332 real 0m7.601s 00:17:43.332 user 0m8.158s 00:17:43.332 sys 0m4.695s 00:17:43.332 17:21:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:43.332 17:21:52 -- common/autotest_common.sh@10 -- # set +x 00:17:43.332 ************************************ 00:17:43.332 END TEST nvmf_aer 00:17:43.332 ************************************ 00:17:43.332 17:21:52 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:17:43.332 17:21:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:43.332 17:21:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:43.332 17:21:52 -- common/autotest_common.sh@10 -- # set +x 00:17:43.332 ************************************ 00:17:43.332 START TEST nvmf_async_init 00:17:43.332 ************************************ 00:17:43.332 17:21:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:17:43.332 * Looking for test storage... 00:17:43.332 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:43.332 17:21:52 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:43.332 17:21:52 -- nvmf/common.sh@7 -- # uname -s 00:17:43.332 17:21:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:43.332 17:21:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:43.332 17:21:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:43.332 17:21:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:43.332 17:21:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:43.332 17:21:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:43.332 17:21:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:43.332 17:21:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:43.332 17:21:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:43.332 17:21:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:43.332 17:21:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:43.332 17:21:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:17:43.332 17:21:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:43.332 17:21:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:43.332 17:21:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:43.332 17:21:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:43.332 17:21:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:43.332 17:21:52 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:43.332 17:21:52 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.332 17:21:52 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.332 17:21:52 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.333 17:21:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.333 17:21:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.333 17:21:52 -- paths/export.sh@5 -- # export PATH 00:17:43.333 17:21:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.333 17:21:52 -- nvmf/common.sh@47 -- # : 0 00:17:43.333 17:21:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:43.333 17:21:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:43.333 17:21:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:43.333 17:21:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:43.333 17:21:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:43.333 17:21:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:43.333 17:21:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:43.333 17:21:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:43.333 17:21:52 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:17:43.333 17:21:52 -- host/async_init.sh@14 -- # null_block_size=512 00:17:43.333 17:21:52 -- host/async_init.sh@15 -- # null_bdev=null0 00:17:43.333 17:21:52 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:17:43.333 17:21:52 -- host/async_init.sh@20 -- # uuidgen 00:17:43.333 17:21:52 -- host/async_init.sh@20 -- # tr -d - 00:17:43.333 17:21:52 -- host/async_init.sh@20 -- # nguid=5622f3730bc84d548a331e5cafd8cf96 00:17:43.333 17:21:52 -- host/async_init.sh@22 -- # nvmftestinit 00:17:43.333 17:21:52 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 
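The nguid used by the async_init test below is nothing more than a freshly generated UUID with its dashes stripped, exactly as the two xtrace lines above show:

    # NGUID generation as traced above: 32 hex characters from uuidgen
    nguid=$(uuidgen | tr -d -)     # this run produced 5622f3730bc84d548a331e5cafd8cf96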
00:17:43.333 17:21:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:43.333 17:21:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:43.333 17:21:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:43.333 17:21:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:43.333 17:21:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.333 17:21:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:43.333 17:21:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.333 17:21:52 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:43.333 17:21:52 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:43.333 17:21:52 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:43.333 17:21:52 -- common/autotest_common.sh@10 -- # set +x 00:17:49.894 17:21:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:49.894 17:21:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:49.894 17:21:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:49.894 17:21:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:49.894 17:21:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:49.894 17:21:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:49.894 17:21:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:49.894 17:21:57 -- nvmf/common.sh@295 -- # net_devs=() 00:17:49.894 17:21:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:49.894 17:21:57 -- nvmf/common.sh@296 -- # e810=() 00:17:49.894 17:21:57 -- nvmf/common.sh@296 -- # local -ga e810 00:17:49.894 17:21:57 -- nvmf/common.sh@297 -- # x722=() 00:17:49.894 17:21:57 -- nvmf/common.sh@297 -- # local -ga x722 00:17:49.894 17:21:57 -- nvmf/common.sh@298 -- # mlx=() 00:17:49.894 17:21:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:49.894 17:21:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:49.895 17:21:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:49.895 17:21:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:49.895 17:21:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:49.895 17:21:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:49.895 17:21:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:49.895 17:21:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:49.895 17:21:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:49.895 17:21:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:49.895 17:21:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:49.895 17:21:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:49.895 17:21:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:49.895 17:21:57 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:49.895 17:21:57 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:49.895 17:21:57 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:49.895 17:21:57 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:49.895 17:21:57 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:49.895 17:21:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:49.895 17:21:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:49.895 17:21:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:17:49.895 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:17:49.895 17:21:57 -- 
nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:49.895 17:21:57 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:49.895 17:21:57 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:49.895 17:21:57 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:49.895 17:21:57 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:49.895 17:21:57 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:49.895 17:21:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:49.895 17:21:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:17:49.895 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:17:49.895 17:21:57 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:49.895 17:21:57 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:49.895 17:21:57 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:49.895 17:21:57 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:49.895 17:21:57 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:49.895 17:21:57 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:49.895 17:21:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:49.895 17:21:57 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:49.895 17:21:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:49.895 17:21:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.895 17:21:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:49.895 17:21:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:49.895 17:21:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:17:49.895 Found net devices under 0000:da:00.0: mlx_0_0 00:17:49.895 17:21:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:49.895 17:21:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:49.895 17:21:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.895 17:21:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:49.895 17:21:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:49.895 17:21:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:17:49.895 Found net devices under 0000:da:00.1: mlx_0_1 00:17:49.895 17:21:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:49.895 17:21:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:49.895 17:21:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:49.895 17:21:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:49.895 17:21:57 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:17:49.895 17:21:57 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:17:49.895 17:21:57 -- nvmf/common.sh@409 -- # rdma_device_init 00:17:49.895 17:21:57 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:17:49.895 17:21:57 -- nvmf/common.sh@58 -- # uname 00:17:49.895 17:21:57 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:49.895 17:21:57 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:49.895 17:21:57 -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:49.895 17:21:57 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:49.895 17:21:57 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:49.895 17:21:57 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:49.895 17:21:57 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:49.895 17:21:57 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:49.895 17:21:57 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:17:49.895 17:21:57 -- nvmf/common.sh@72 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:17:49.895 17:21:57 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:49.895 17:21:57 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:49.895 17:21:57 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:49.895 17:21:57 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:49.895 17:21:57 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:49.895 17:21:58 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:49.895 17:21:58 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:49.895 17:21:58 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:49.895 17:21:58 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:49.895 17:21:58 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:49.895 17:21:58 -- nvmf/common.sh@105 -- # continue 2 00:17:49.895 17:21:58 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:49.895 17:21:58 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:49.895 17:21:58 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:49.895 17:21:58 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:49.895 17:21:58 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:49.895 17:21:58 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:49.895 17:21:58 -- nvmf/common.sh@105 -- # continue 2 00:17:49.895 17:21:58 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:49.895 17:21:58 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:49.895 17:21:58 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:49.895 17:21:58 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:49.895 17:21:58 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:49.895 17:21:58 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:49.895 17:21:58 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:49.895 17:21:58 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:49.895 17:21:58 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:49.895 434: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:49.895 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:17:49.895 altname enp218s0f0np0 00:17:49.895 altname ens818f0np0 00:17:49.895 inet 192.168.100.8/24 scope global mlx_0_0 00:17:49.895 valid_lft forever preferred_lft forever 00:17:49.895 17:21:58 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:49.895 17:21:58 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:49.895 17:21:58 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:49.895 17:21:58 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:49.895 17:21:58 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:49.895 17:21:58 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:49.895 17:21:58 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:49.895 17:21:58 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:49.895 17:21:58 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:49.895 435: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:49.895 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:17:49.895 altname enp218s0f1np1 00:17:49.895 altname ens818f1np1 00:17:49.895 inet 192.168.100.9/24 scope global mlx_0_1 00:17:49.895 valid_lft forever preferred_lft forever 00:17:49.895 17:21:58 -- nvmf/common.sh@411 -- # return 0 00:17:49.895 17:21:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:49.895 17:21:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:49.895 17:21:58 -- 
nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:17:49.895 17:21:58 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:17:49.895 17:21:58 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:49.895 17:21:58 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:49.895 17:21:58 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:49.895 17:21:58 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:49.895 17:21:58 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:49.895 17:21:58 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:49.895 17:21:58 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:49.895 17:21:58 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:49.895 17:21:58 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:49.895 17:21:58 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:49.895 17:21:58 -- nvmf/common.sh@105 -- # continue 2 00:17:49.895 17:21:58 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:49.895 17:21:58 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:49.895 17:21:58 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:49.895 17:21:58 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:49.895 17:21:58 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:49.895 17:21:58 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:49.895 17:21:58 -- nvmf/common.sh@105 -- # continue 2 00:17:49.895 17:21:58 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:49.895 17:21:58 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:49.895 17:21:58 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:49.895 17:21:58 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:49.895 17:21:58 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:49.895 17:21:58 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:49.895 17:21:58 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:49.895 17:21:58 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:49.895 17:21:58 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:49.895 17:21:58 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:49.895 17:21:58 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:49.895 17:21:58 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:49.895 17:21:58 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:17:49.895 192.168.100.9' 00:17:49.895 17:21:58 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:49.895 192.168.100.9' 00:17:49.895 17:21:58 -- nvmf/common.sh@446 -- # head -n 1 00:17:49.895 17:21:58 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:49.895 17:21:58 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:17:49.895 192.168.100.9' 00:17:49.896 17:21:58 -- nvmf/common.sh@447 -- # tail -n +2 00:17:49.896 17:21:58 -- nvmf/common.sh@447 -- # head -n 1 00:17:49.896 17:21:58 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:49.896 17:21:58 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:17:49.896 17:21:58 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:49.896 17:21:58 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:17:49.896 17:21:58 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:17:49.896 17:21:58 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:17:49.896 17:21:58 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:17:49.896 17:21:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:49.896 
17:21:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:49.896 17:21:58 -- common/autotest_common.sh@10 -- # set +x 00:17:49.896 17:21:58 -- nvmf/common.sh@470 -- # nvmfpid=3032780 00:17:49.896 17:21:58 -- nvmf/common.sh@471 -- # waitforlisten 3032780 00:17:49.896 17:21:58 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:49.896 17:21:58 -- common/autotest_common.sh@817 -- # '[' -z 3032780 ']' 00:17:49.896 17:21:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.896 17:21:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:49.896 17:21:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.896 17:21:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:49.896 17:21:58 -- common/autotest_common.sh@10 -- # set +x 00:17:49.896 [2024-04-24 17:21:58.200965] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:17:49.896 [2024-04-24 17:21:58.201007] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.896 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.896 [2024-04-24 17:21:58.255562] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.896 [2024-04-24 17:21:58.330598] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:49.896 [2024-04-24 17:21:58.330639] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:49.896 [2024-04-24 17:21:58.330646] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:49.896 [2024-04-24 17:21:58.330652] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:49.896 [2024-04-24 17:21:58.330657] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:49.896 [2024-04-24 17:21:58.330675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.896 17:21:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:49.896 17:21:58 -- common/autotest_common.sh@850 -- # return 0 00:17:49.896 17:21:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:49.896 17:21:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:49.896 17:21:58 -- common/autotest_common.sh@10 -- # set +x 00:17:49.896 17:21:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.896 17:21:59 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:17:49.896 17:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.896 17:21:59 -- common/autotest_common.sh@10 -- # set +x 00:17:49.896 [2024-04-24 17:21:59.046762] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x23c7d70/0x23cc260) succeed. 00:17:49.896 [2024-04-24 17:21:59.055355] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23c9270/0x240d8f0) succeed. 
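The trace that follows is host/async_init.sh exporting a null bdev over RDMA and then attaching back to the same target as an initiator from within the same SPDK application. Condensed into a sketch (same rpc.py assumption as the earlier aer.sh sketch; the 1024 is MiB, which matches the 2097152 blocks of 512 bytes reported in the JSON below):

    # Sketch of the async_init setup: null bdev -> subsystem cnode0 -> loopback RDMA attach.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_null_create null0 1024 512
    $rpc bdev_wait_for_examine
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 5622f3730bc84d548a331e5cafd8cf96
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0                    # surfaces nvme0n1 carrying the NGUID above
    $rpc bdev_get_bdevs -b nvme0n1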
00:17:49.896 17:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.896 17:21:59 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:17:49.896 17:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.896 17:21:59 -- common/autotest_common.sh@10 -- # set +x 00:17:49.896 null0 00:17:49.896 17:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.896 17:21:59 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:17:49.896 17:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.896 17:21:59 -- common/autotest_common.sh@10 -- # set +x 00:17:49.896 17:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.896 17:21:59 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:17:49.896 17:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.896 17:21:59 -- common/autotest_common.sh@10 -- # set +x 00:17:49.896 17:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.896 17:21:59 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 5622f3730bc84d548a331e5cafd8cf96 00:17:49.896 17:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.896 17:21:59 -- common/autotest_common.sh@10 -- # set +x 00:17:49.896 17:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.896 17:21:59 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:17:49.896 17:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.896 17:21:59 -- common/autotest_common.sh@10 -- # set +x 00:17:49.896 [2024-04-24 17:21:59.137054] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:49.896 17:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.896 17:21:59 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:17:49.896 17:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.896 17:21:59 -- common/autotest_common.sh@10 -- # set +x 00:17:50.154 nvme0n1 00:17:50.154 17:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:50.154 17:21:59 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:50.154 17:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:50.154 17:21:59 -- common/autotest_common.sh@10 -- # set +x 00:17:50.154 [ 00:17:50.154 { 00:17:50.154 "name": "nvme0n1", 00:17:50.154 "aliases": [ 00:17:50.154 "5622f373-0bc8-4d54-8a33-1e5cafd8cf96" 00:17:50.154 ], 00:17:50.154 "product_name": "NVMe disk", 00:17:50.154 "block_size": 512, 00:17:50.154 "num_blocks": 2097152, 00:17:50.154 "uuid": "5622f373-0bc8-4d54-8a33-1e5cafd8cf96", 00:17:50.154 "assigned_rate_limits": { 00:17:50.154 "rw_ios_per_sec": 0, 00:17:50.154 "rw_mbytes_per_sec": 0, 00:17:50.154 "r_mbytes_per_sec": 0, 00:17:50.154 "w_mbytes_per_sec": 0 00:17:50.154 }, 00:17:50.154 "claimed": false, 00:17:50.154 "zoned": false, 00:17:50.154 "supported_io_types": { 00:17:50.154 "read": true, 00:17:50.154 "write": true, 00:17:50.154 "unmap": false, 00:17:50.154 "write_zeroes": true, 00:17:50.154 "flush": true, 00:17:50.154 "reset": true, 00:17:50.154 "compare": true, 00:17:50.154 "compare_and_write": true, 00:17:50.154 "abort": true, 00:17:50.154 "nvme_admin": true, 00:17:50.154 "nvme_io": true 00:17:50.154 }, 00:17:50.154 "memory_domains": [ 00:17:50.154 { 00:17:50.154 
"dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:17:50.154 "dma_device_type": 0 00:17:50.154 } 00:17:50.154 ], 00:17:50.154 "driver_specific": { 00:17:50.154 "nvme": [ 00:17:50.154 { 00:17:50.154 "trid": { 00:17:50.154 "trtype": "RDMA", 00:17:50.154 "adrfam": "IPv4", 00:17:50.154 "traddr": "192.168.100.8", 00:17:50.154 "trsvcid": "4420", 00:17:50.154 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:50.154 }, 00:17:50.154 "ctrlr_data": { 00:17:50.154 "cntlid": 1, 00:17:50.154 "vendor_id": "0x8086", 00:17:50.154 "model_number": "SPDK bdev Controller", 00:17:50.154 "serial_number": "00000000000000000000", 00:17:50.154 "firmware_revision": "24.05", 00:17:50.154 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:50.154 "oacs": { 00:17:50.154 "security": 0, 00:17:50.154 "format": 0, 00:17:50.154 "firmware": 0, 00:17:50.154 "ns_manage": 0 00:17:50.154 }, 00:17:50.154 "multi_ctrlr": true, 00:17:50.154 "ana_reporting": false 00:17:50.154 }, 00:17:50.154 "vs": { 00:17:50.154 "nvme_version": "1.3" 00:17:50.154 }, 00:17:50.154 "ns_data": { 00:17:50.154 "id": 1, 00:17:50.154 "can_share": true 00:17:50.154 } 00:17:50.154 } 00:17:50.154 ], 00:17:50.154 "mp_policy": "active_passive" 00:17:50.154 } 00:17:50.154 } 00:17:50.154 ] 00:17:50.154 17:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:50.154 17:21:59 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:17:50.154 17:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:50.154 17:21:59 -- common/autotest_common.sh@10 -- # set +x 00:17:50.154 [2024-04-24 17:21:59.234892] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:50.154 [2024-04-24 17:21:59.258934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:50.154 [2024-04-24 17:21:59.282796] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:50.154 17:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:50.154 17:21:59 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:50.154 17:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:50.154 17:21:59 -- common/autotest_common.sh@10 -- # set +x 00:17:50.154 [ 00:17:50.154 { 00:17:50.154 "name": "nvme0n1", 00:17:50.154 "aliases": [ 00:17:50.154 "5622f373-0bc8-4d54-8a33-1e5cafd8cf96" 00:17:50.154 ], 00:17:50.154 "product_name": "NVMe disk", 00:17:50.154 "block_size": 512, 00:17:50.154 "num_blocks": 2097152, 00:17:50.154 "uuid": "5622f373-0bc8-4d54-8a33-1e5cafd8cf96", 00:17:50.154 "assigned_rate_limits": { 00:17:50.154 "rw_ios_per_sec": 0, 00:17:50.154 "rw_mbytes_per_sec": 0, 00:17:50.154 "r_mbytes_per_sec": 0, 00:17:50.154 "w_mbytes_per_sec": 0 00:17:50.154 }, 00:17:50.154 "claimed": false, 00:17:50.154 "zoned": false, 00:17:50.154 "supported_io_types": { 00:17:50.154 "read": true, 00:17:50.154 "write": true, 00:17:50.154 "unmap": false, 00:17:50.154 "write_zeroes": true, 00:17:50.154 "flush": true, 00:17:50.154 "reset": true, 00:17:50.154 "compare": true, 00:17:50.155 "compare_and_write": true, 00:17:50.155 "abort": true, 00:17:50.155 "nvme_admin": true, 00:17:50.155 "nvme_io": true 00:17:50.155 }, 00:17:50.155 "memory_domains": [ 00:17:50.155 { 00:17:50.155 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:17:50.155 "dma_device_type": 0 00:17:50.155 } 00:17:50.155 ], 00:17:50.155 "driver_specific": { 00:17:50.155 "nvme": [ 00:17:50.155 { 00:17:50.155 "trid": { 00:17:50.155 "trtype": "RDMA", 00:17:50.155 "adrfam": "IPv4", 00:17:50.155 "traddr": "192.168.100.8", 00:17:50.155 "trsvcid": "4420", 00:17:50.155 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:50.155 }, 00:17:50.155 "ctrlr_data": { 00:17:50.155 "cntlid": 2, 00:17:50.155 "vendor_id": "0x8086", 00:17:50.155 "model_number": "SPDK bdev Controller", 00:17:50.155 "serial_number": "00000000000000000000", 00:17:50.155 "firmware_revision": "24.05", 00:17:50.155 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:50.155 "oacs": { 00:17:50.155 "security": 0, 00:17:50.155 "format": 0, 00:17:50.155 "firmware": 0, 00:17:50.155 "ns_manage": 0 00:17:50.155 }, 00:17:50.155 "multi_ctrlr": true, 00:17:50.155 "ana_reporting": false 00:17:50.155 }, 00:17:50.155 "vs": { 00:17:50.155 "nvme_version": "1.3" 00:17:50.155 }, 00:17:50.155 "ns_data": { 00:17:50.155 "id": 1, 00:17:50.155 "can_share": true 00:17:50.155 } 00:17:50.155 } 00:17:50.155 ], 00:17:50.155 "mp_policy": "active_passive" 00:17:50.155 } 00:17:50.155 } 00:17:50.155 ] 00:17:50.155 17:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:50.155 17:21:59 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:50.155 17:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:50.155 17:21:59 -- common/autotest_common.sh@10 -- # set +x 00:17:50.155 17:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:50.155 17:21:59 -- host/async_init.sh@53 -- # mktemp 00:17:50.155 17:21:59 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.aCObfsmplt 00:17:50.155 17:21:59 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:50.155 17:21:59 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.aCObfsmplt 00:17:50.155 17:21:59 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:17:50.155 17:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:50.155 17:21:59 -- common/autotest_common.sh@10 -- # set +x 
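The steps on either side of this point exercise the secure-channel variant: a TLS PSK is written to a mode-0600 temp file, any-host access is disabled, a second listener is added on port 4421 with --secure-channel, and host1 is re-attached using the same key (hence the "TLS support is considered experimental" notice below). Condensed into a sketch (the key string is the one from this run; the temp filename is whatever mktemp returns, and the redirect into it is implied by the trace rather than shown):

    # Sketch of the PSK / secure-channel sequence traced around this point ($rpc as above).
    key_path=$(mktemp)
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
    chmod 0600 "$key_path"
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
    $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"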
00:17:50.155 17:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:50.155 17:21:59 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:17:50.155 17:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:50.155 17:21:59 -- common/autotest_common.sh@10 -- # set +x 00:17:50.155 [2024-04-24 17:21:59.341982] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:17:50.155 17:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:50.155 17:21:59 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aCObfsmplt 00:17:50.155 17:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:50.155 17:21:59 -- common/autotest_common.sh@10 -- # set +x 00:17:50.155 17:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:50.155 17:21:59 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aCObfsmplt 00:17:50.155 17:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:50.155 17:21:59 -- common/autotest_common.sh@10 -- # set +x 00:17:50.155 [2024-04-24 17:21:59.358008] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:50.414 nvme0n1 00:17:50.414 17:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:50.414 17:21:59 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:50.414 17:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:50.414 17:21:59 -- common/autotest_common.sh@10 -- # set +x 00:17:50.414 [ 00:17:50.414 { 00:17:50.414 "name": "nvme0n1", 00:17:50.414 "aliases": [ 00:17:50.414 "5622f373-0bc8-4d54-8a33-1e5cafd8cf96" 00:17:50.414 ], 00:17:50.414 "product_name": "NVMe disk", 00:17:50.414 "block_size": 512, 00:17:50.414 "num_blocks": 2097152, 00:17:50.414 "uuid": "5622f373-0bc8-4d54-8a33-1e5cafd8cf96", 00:17:50.414 "assigned_rate_limits": { 00:17:50.414 "rw_ios_per_sec": 0, 00:17:50.414 "rw_mbytes_per_sec": 0, 00:17:50.414 "r_mbytes_per_sec": 0, 00:17:50.414 "w_mbytes_per_sec": 0 00:17:50.414 }, 00:17:50.414 "claimed": false, 00:17:50.414 "zoned": false, 00:17:50.414 "supported_io_types": { 00:17:50.414 "read": true, 00:17:50.414 "write": true, 00:17:50.414 "unmap": false, 00:17:50.414 "write_zeroes": true, 00:17:50.414 "flush": true, 00:17:50.414 "reset": true, 00:17:50.414 "compare": true, 00:17:50.414 "compare_and_write": true, 00:17:50.414 "abort": true, 00:17:50.414 "nvme_admin": true, 00:17:50.414 "nvme_io": true 00:17:50.414 }, 00:17:50.414 "memory_domains": [ 00:17:50.414 { 00:17:50.414 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:17:50.414 "dma_device_type": 0 00:17:50.414 } 00:17:50.414 ], 00:17:50.414 "driver_specific": { 00:17:50.414 "nvme": [ 00:17:50.414 { 00:17:50.414 "trid": { 00:17:50.414 "trtype": "RDMA", 00:17:50.414 "adrfam": "IPv4", 00:17:50.414 "traddr": "192.168.100.8", 00:17:50.414 "trsvcid": "4421", 00:17:50.414 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:50.414 }, 00:17:50.414 "ctrlr_data": { 00:17:50.414 "cntlid": 3, 00:17:50.414 "vendor_id": "0x8086", 00:17:50.414 "model_number": "SPDK bdev Controller", 00:17:50.414 "serial_number": "00000000000000000000", 00:17:50.414 "firmware_revision": "24.05", 00:17:50.414 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:50.414 "oacs": 
{ 00:17:50.414 "security": 0, 00:17:50.414 "format": 0, 00:17:50.414 "firmware": 0, 00:17:50.414 "ns_manage": 0 00:17:50.414 }, 00:17:50.414 "multi_ctrlr": true, 00:17:50.414 "ana_reporting": false 00:17:50.414 }, 00:17:50.414 "vs": { 00:17:50.414 "nvme_version": "1.3" 00:17:50.414 }, 00:17:50.414 "ns_data": { 00:17:50.414 "id": 1, 00:17:50.414 "can_share": true 00:17:50.414 } 00:17:50.414 } 00:17:50.414 ], 00:17:50.414 "mp_policy": "active_passive" 00:17:50.414 } 00:17:50.414 } 00:17:50.414 ] 00:17:50.414 17:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:50.414 17:21:59 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:50.414 17:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:50.414 17:21:59 -- common/autotest_common.sh@10 -- # set +x 00:17:50.414 17:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:50.414 17:21:59 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.aCObfsmplt 00:17:50.414 17:21:59 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:17:50.414 17:21:59 -- host/async_init.sh@78 -- # nvmftestfini 00:17:50.414 17:21:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:50.414 17:21:59 -- nvmf/common.sh@117 -- # sync 00:17:50.414 17:21:59 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:50.414 17:21:59 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:50.414 17:21:59 -- nvmf/common.sh@120 -- # set +e 00:17:50.414 17:21:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:50.414 17:21:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:50.414 rmmod nvme_rdma 00:17:50.414 rmmod nvme_fabrics 00:17:50.414 17:21:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:50.414 17:21:59 -- nvmf/common.sh@124 -- # set -e 00:17:50.414 17:21:59 -- nvmf/common.sh@125 -- # return 0 00:17:50.414 17:21:59 -- nvmf/common.sh@478 -- # '[' -n 3032780 ']' 00:17:50.414 17:21:59 -- nvmf/common.sh@479 -- # killprocess 3032780 00:17:50.414 17:21:59 -- common/autotest_common.sh@936 -- # '[' -z 3032780 ']' 00:17:50.414 17:21:59 -- common/autotest_common.sh@940 -- # kill -0 3032780 00:17:50.414 17:21:59 -- common/autotest_common.sh@941 -- # uname 00:17:50.414 17:21:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:50.414 17:21:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3032780 00:17:50.414 17:21:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:50.414 17:21:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:50.414 17:21:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3032780' 00:17:50.414 killing process with pid 3032780 00:17:50.414 17:21:59 -- common/autotest_common.sh@955 -- # kill 3032780 00:17:50.414 17:21:59 -- common/autotest_common.sh@960 -- # wait 3032780 00:17:50.673 17:21:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:50.673 17:21:59 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:17:50.673 00:17:50.673 real 0m7.375s 00:17:50.673 user 0m3.384s 00:17:50.673 sys 0m4.530s 00:17:50.673 17:21:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:50.673 17:21:59 -- common/autotest_common.sh@10 -- # set +x 00:17:50.673 ************************************ 00:17:50.673 END TEST nvmf_async_init 00:17:50.673 ************************************ 00:17:50.673 17:21:59 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:17:50.673 17:21:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:50.673 17:21:59 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:17:50.673 17:21:59 -- common/autotest_common.sh@10 -- # set +x 00:17:50.931 ************************************ 00:17:50.931 START TEST dma 00:17:50.931 ************************************ 00:17:50.931 17:21:59 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:17:50.931 * Looking for test storage... 00:17:50.931 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:50.932 17:22:00 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:50.932 17:22:00 -- nvmf/common.sh@7 -- # uname -s 00:17:50.932 17:22:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:50.932 17:22:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:50.932 17:22:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:50.932 17:22:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:50.932 17:22:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:50.932 17:22:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:50.932 17:22:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:50.932 17:22:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:50.932 17:22:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:50.932 17:22:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:50.932 17:22:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:50.932 17:22:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:17:50.932 17:22:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:50.932 17:22:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:50.932 17:22:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:50.932 17:22:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:50.932 17:22:00 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:50.932 17:22:00 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:50.932 17:22:00 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:50.932 17:22:00 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:50.932 17:22:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.932 17:22:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.932 17:22:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.932 17:22:00 -- paths/export.sh@5 -- # export PATH 00:17:50.932 17:22:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.932 17:22:00 -- nvmf/common.sh@47 -- # : 0 00:17:50.932 17:22:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:50.932 17:22:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:50.932 17:22:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:50.932 17:22:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:50.932 17:22:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:50.932 17:22:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:50.932 17:22:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:50.932 17:22:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:50.932 17:22:00 -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:17:50.932 17:22:00 -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:17:50.932 17:22:00 -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:17:50.932 17:22:00 -- host/dma.sh@18 -- # subsystem=0 00:17:50.932 17:22:00 -- host/dma.sh@93 -- # nvmftestinit 00:17:50.932 17:22:00 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:17:50.932 17:22:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:50.932 17:22:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:50.932 17:22:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:50.932 17:22:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:50.932 17:22:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.932 17:22:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.932 17:22:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.932 17:22:00 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:50.932 17:22:00 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:50.932 17:22:00 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:50.932 17:22:00 -- common/autotest_common.sh@10 -- # set +x 00:17:56.201 17:22:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:56.201 17:22:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:56.201 17:22:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:56.201 17:22:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:56.201 17:22:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:56.201 17:22:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:56.201 17:22:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:56.201 17:22:04 -- nvmf/common.sh@295 -- # net_devs=() 
00:17:56.201 17:22:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:56.201 17:22:04 -- nvmf/common.sh@296 -- # e810=() 00:17:56.201 17:22:04 -- nvmf/common.sh@296 -- # local -ga e810 00:17:56.201 17:22:04 -- nvmf/common.sh@297 -- # x722=() 00:17:56.201 17:22:04 -- nvmf/common.sh@297 -- # local -ga x722 00:17:56.201 17:22:04 -- nvmf/common.sh@298 -- # mlx=() 00:17:56.201 17:22:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:56.201 17:22:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:56.201 17:22:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:56.201 17:22:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:56.201 17:22:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:56.201 17:22:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:56.201 17:22:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:56.201 17:22:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:56.201 17:22:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:56.201 17:22:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:56.201 17:22:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:56.201 17:22:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:56.201 17:22:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:56.201 17:22:04 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:56.201 17:22:04 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:56.201 17:22:04 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:56.201 17:22:04 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:56.201 17:22:04 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:56.201 17:22:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:56.201 17:22:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:56.201 17:22:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:17:56.201 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:17:56.201 17:22:04 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:56.201 17:22:04 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:56.201 17:22:04 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:56.201 17:22:04 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:56.201 17:22:04 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:56.201 17:22:04 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:56.201 17:22:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:56.201 17:22:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:17:56.201 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:17:56.201 17:22:04 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:56.201 17:22:04 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:56.201 17:22:04 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:56.201 17:22:04 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:56.201 17:22:04 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:56.201 17:22:04 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:56.201 17:22:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:56.201 17:22:04 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:56.201 17:22:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:56.201 17:22:04 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.201 17:22:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:56.201 17:22:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:56.201 17:22:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:17:56.201 Found net devices under 0000:da:00.0: mlx_0_0 00:17:56.201 17:22:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.201 17:22:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:56.201 17:22:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.201 17:22:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:56.201 17:22:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:56.201 17:22:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:17:56.201 Found net devices under 0000:da:00.1: mlx_0_1 00:17:56.201 17:22:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.201 17:22:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:56.201 17:22:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:56.201 17:22:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:56.201 17:22:04 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:17:56.201 17:22:04 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:17:56.201 17:22:04 -- nvmf/common.sh@409 -- # rdma_device_init 00:17:56.201 17:22:04 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:17:56.201 17:22:04 -- nvmf/common.sh@58 -- # uname 00:17:56.201 17:22:04 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:56.201 17:22:04 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:56.201 17:22:04 -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:56.201 17:22:04 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:56.201 17:22:04 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:56.201 17:22:04 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:56.201 17:22:04 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:56.201 17:22:04 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:56.201 17:22:04 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:17:56.201 17:22:04 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:56.201 17:22:04 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:56.201 17:22:04 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:56.201 17:22:04 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:56.201 17:22:04 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:56.201 17:22:04 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:56.201 17:22:04 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:56.201 17:22:04 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:56.201 17:22:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:56.201 17:22:04 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:56.201 17:22:04 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:56.201 17:22:04 -- nvmf/common.sh@105 -- # continue 2 00:17:56.201 17:22:04 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:56.201 17:22:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:56.201 17:22:04 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:56.201 17:22:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:56.201 17:22:04 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:56.201 17:22:04 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:56.201 17:22:04 -- 
nvmf/common.sh@105 -- # continue 2 00:17:56.201 17:22:04 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:56.201 17:22:04 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:56.201 17:22:04 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:56.201 17:22:04 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:56.201 17:22:04 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:56.201 17:22:04 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:56.201 17:22:04 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:56.201 17:22:04 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:56.201 17:22:04 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:56.201 434: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:56.201 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:17:56.201 altname enp218s0f0np0 00:17:56.201 altname ens818f0np0 00:17:56.201 inet 192.168.100.8/24 scope global mlx_0_0 00:17:56.201 valid_lft forever preferred_lft forever 00:17:56.201 17:22:04 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:56.201 17:22:04 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:56.201 17:22:04 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:56.201 17:22:04 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:56.201 17:22:04 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:56.201 17:22:04 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:56.201 17:22:04 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:56.201 17:22:04 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:56.202 17:22:04 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:56.202 435: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:56.202 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:17:56.202 altname enp218s0f1np1 00:17:56.202 altname ens818f1np1 00:17:56.202 inet 192.168.100.9/24 scope global mlx_0_1 00:17:56.202 valid_lft forever preferred_lft forever 00:17:56.202 17:22:04 -- nvmf/common.sh@411 -- # return 0 00:17:56.202 17:22:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:56.202 17:22:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:56.202 17:22:04 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:17:56.202 17:22:04 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:17:56.202 17:22:04 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:56.202 17:22:04 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:56.202 17:22:04 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:56.202 17:22:04 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:56.202 17:22:04 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:56.202 17:22:04 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:56.202 17:22:04 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:56.202 17:22:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:56.202 17:22:04 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:56.202 17:22:04 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:56.202 17:22:04 -- nvmf/common.sh@105 -- # continue 2 00:17:56.202 17:22:04 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:56.202 17:22:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:56.202 17:22:04 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:56.202 17:22:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:56.202 17:22:04 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:17:56.202 17:22:04 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:56.202 17:22:04 -- nvmf/common.sh@105 -- # continue 2 00:17:56.202 17:22:04 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:56.202 17:22:04 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:56.202 17:22:04 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:56.202 17:22:04 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:56.202 17:22:04 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:56.202 17:22:04 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:56.202 17:22:04 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:56.202 17:22:04 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:56.202 17:22:04 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:56.202 17:22:04 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:56.202 17:22:04 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:56.202 17:22:04 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:56.202 17:22:04 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:17:56.202 192.168.100.9' 00:17:56.202 17:22:04 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:56.202 192.168.100.9' 00:17:56.202 17:22:04 -- nvmf/common.sh@446 -- # head -n 1 00:17:56.202 17:22:04 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:56.202 17:22:04 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:17:56.202 192.168.100.9' 00:17:56.202 17:22:04 -- nvmf/common.sh@447 -- # head -n 1 00:17:56.202 17:22:04 -- nvmf/common.sh@447 -- # tail -n +2 00:17:56.202 17:22:04 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:56.202 17:22:04 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:17:56.202 17:22:04 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:56.202 17:22:04 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:17:56.202 17:22:04 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:17:56.202 17:22:04 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:17:56.202 17:22:04 -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:17:56.202 17:22:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:56.202 17:22:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:56.202 17:22:04 -- common/autotest_common.sh@10 -- # set +x 00:17:56.202 17:22:04 -- nvmf/common.sh@470 -- # nvmfpid=3035022 00:17:56.202 17:22:04 -- nvmf/common.sh@471 -- # waitforlisten 3035022 00:17:56.202 17:22:04 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:56.202 17:22:04 -- common/autotest_common.sh@817 -- # '[' -z 3035022 ']' 00:17:56.202 17:22:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.202 17:22:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:56.202 17:22:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.202 17:22:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:56.202 17:22:04 -- common/autotest_common.sh@10 -- # set +x 00:17:56.202 [2024-04-24 17:22:04.996174] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:17:56.202 [2024-04-24 17:22:04.996217] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:56.202 EAL: No free 2048 kB hugepages reported on node 1 00:17:56.202 [2024-04-24 17:22:05.050758] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:56.202 [2024-04-24 17:22:05.122533] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.202 [2024-04-24 17:22:05.122570] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:56.202 [2024-04-24 17:22:05.122577] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:56.202 [2024-04-24 17:22:05.122583] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:56.202 [2024-04-24 17:22:05.122588] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:56.202 [2024-04-24 17:22:05.122636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.202 [2024-04-24 17:22:05.122638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.771 17:22:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:56.771 17:22:05 -- common/autotest_common.sh@850 -- # return 0 00:17:56.771 17:22:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:56.771 17:22:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:56.771 17:22:05 -- common/autotest_common.sh@10 -- # set +x 00:17:56.771 17:22:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:56.771 17:22:05 -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:17:56.771 17:22:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.771 17:22:05 -- common/autotest_common.sh@10 -- # set +x 00:17:56.771 [2024-04-24 17:22:05.837987] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc228b0/0xc26da0) succeed. 00:17:56.771 [2024-04-24 17:22:05.846808] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc23db0/0xc68430) succeed. 
00:17:56.771 17:22:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.771 17:22:05 -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:17:56.771 17:22:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.771 17:22:05 -- common/autotest_common.sh@10 -- # set +x 00:17:56.771 Malloc0 00:17:56.771 17:22:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.771 17:22:05 -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:17:56.771 17:22:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.771 17:22:05 -- common/autotest_common.sh@10 -- # set +x 00:17:56.771 17:22:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.771 17:22:05 -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:17:56.771 17:22:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.771 17:22:05 -- common/autotest_common.sh@10 -- # set +x 00:17:56.771 17:22:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.771 17:22:05 -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:17:56.771 17:22:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.771 17:22:05 -- common/autotest_common.sh@10 -- # set +x 00:17:56.771 [2024-04-24 17:22:05.994802] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:56.771 17:22:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.771 17:22:05 -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:17:56.771 17:22:05 -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:17:56.771 17:22:05 -- nvmf/common.sh@521 -- # config=() 00:17:56.771 17:22:06 -- nvmf/common.sh@521 -- # local subsystem config 00:17:56.771 17:22:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:56.771 17:22:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:56.771 { 00:17:56.771 "params": { 00:17:56.771 "name": "Nvme$subsystem", 00:17:56.771 "trtype": "$TEST_TRANSPORT", 00:17:56.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:56.771 "adrfam": "ipv4", 00:17:56.771 "trsvcid": "$NVMF_PORT", 00:17:56.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:56.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:56.771 "hdgst": ${hdgst:-false}, 00:17:56.771 "ddgst": ${ddgst:-false} 00:17:56.771 }, 00:17:56.771 "method": "bdev_nvme_attach_controller" 00:17:56.771 } 00:17:56.771 EOF 00:17:56.771 )") 00:17:56.771 17:22:06 -- nvmf/common.sh@543 -- # cat 00:17:56.771 17:22:06 -- nvmf/common.sh@545 -- # jq . 00:17:56.771 17:22:06 -- nvmf/common.sh@546 -- # IFS=, 00:17:56.771 17:22:06 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:56.771 "params": { 00:17:56.771 "name": "Nvme0", 00:17:56.771 "trtype": "rdma", 00:17:56.771 "traddr": "192.168.100.8", 00:17:56.771 "adrfam": "ipv4", 00:17:56.771 "trsvcid": "4420", 00:17:56.771 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:56.771 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:56.771 "hdgst": false, 00:17:56.771 "ddgst": false 00:17:56.771 }, 00:17:56.771 "method": "bdev_nvme_attach_controller" 00:17:56.771 }' 00:17:57.030 [2024-04-24 17:22:06.040337] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:17:57.030 [2024-04-24 17:22:06.040379] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3035059 ] 00:17:57.030 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.030 [2024-04-24 17:22:06.089384] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:57.030 [2024-04-24 17:22:06.160923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:57.030 [2024-04-24 17:22:06.160925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.590 bdev Nvme0n1 reports 1 memory domains 00:18:03.590 bdev Nvme0n1 supports RDMA memory domain 00:18:03.590 Initialization complete, running randrw IO for 5 sec on 2 cores 00:18:03.590 ========================================================================== 00:18:03.590 Latency [us] 00:18:03.590 IOPS MiB/s Average min max 00:18:03.590 Core 2: 22136.17 86.47 722.01 248.35 8110.82 00:18:03.590 Core 3: 22223.35 86.81 719.18 238.60 8278.95 00:18:03.590 ========================================================================== 00:18:03.590 Total : 44359.52 173.28 720.59 238.60 8278.95 00:18:03.590 00:18:03.590 Total operations: 221834, translate 221834 pull_push 0 memzero 0 00:18:03.590 17:22:11 -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:18:03.590 17:22:11 -- host/dma.sh@107 -- # gen_malloc_json 00:18:03.590 17:22:11 -- host/dma.sh@21 -- # jq . 00:18:03.590 [2024-04-24 17:22:11.628777] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:18:03.590 [2024-04-24 17:22:11.628831] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3035138 ] 00:18:03.590 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.590 [2024-04-24 17:22:11.677644] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:03.590 [2024-04-24 17:22:11.747318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:03.590 [2024-04-24 17:22:11.747320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:08.859 bdev Malloc0 reports 2 memory domains 00:18:08.859 bdev Malloc0 doesn't support RDMA memory domain 00:18:08.859 Initialization complete, running randrw IO for 5 sec on 2 cores 00:18:08.859 ========================================================================== 00:18:08.859 Latency [us] 00:18:08.859 IOPS MiB/s Average min max 00:18:08.859 Core 2: 14808.79 57.85 1079.69 422.55 1394.64 00:18:08.859 Core 3: 14694.63 57.40 1088.09 430.42 1813.79 00:18:08.859 ========================================================================== 00:18:08.859 Total : 29503.42 115.25 1083.88 422.55 1813.79 00:18:08.859 00:18:08.859 Total operations: 147575, translate 0 pull_push 590300 memzero 0 00:18:08.859 17:22:17 -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:18:08.859 17:22:17 -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:18:08.859 17:22:17 -- host/dma.sh@48 -- # local subsystem=0 00:18:08.859 17:22:17 -- host/dma.sh@50 -- # jq . 
00:18:08.859 Ignoring -M option 00:18:08.859 [2024-04-24 17:22:17.119845] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:18:08.859 [2024-04-24 17:22:17.119892] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3035207 ] 00:18:08.859 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.859 [2024-04-24 17:22:17.168256] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:08.859 [2024-04-24 17:22:17.238067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:08.859 [2024-04-24 17:22:17.238069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:08.859 [2024-04-24 17:22:17.445135] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:18:14.130 [2024-04-24 17:22:22.473523] app.c: 937:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:18:14.130 bdev 38143e1b-7ba8-4653-9226-dfc6070889ab reports 1 memory domains 00:18:14.131 bdev 38143e1b-7ba8-4653-9226-dfc6070889ab supports RDMA memory domain 00:18:14.131 Initialization complete, running randread IO for 5 sec on 2 cores 00:18:14.131 ========================================================================== 00:18:14.131 Latency [us] 00:18:14.131 IOPS MiB/s Average min max 00:18:14.131 Core 2: 81908.08 319.95 194.65 80.83 3039.72 00:18:14.131 Core 3: 83882.43 327.67 190.04 81.15 2952.40 00:18:14.131 ========================================================================== 00:18:14.131 Total : 165790.51 647.62 192.31 80.83 3039.72 00:18:14.131 00:18:14.131 Total operations: 829057, translate 0 pull_push 0 memzero 829057 00:18:14.131 17:22:22 -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:18:14.131 EAL: No free 2048 kB hugepages reported on node 1 00:18:14.131 [2024-04-24 17:22:22.799441] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:16.032 Initializing NVMe Controllers 00:18:16.032 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:18:16.032 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:18:16.032 Initialization complete. Launching workers. 
00:18:16.032 ======================================================== 00:18:16.032 Latency(us) 00:18:16.032 Device Information : IOPS MiB/s Average min max 00:18:16.032 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.88 7972.06 6987.55 7995.87 00:18:16.033 ======================================================== 00:18:16.033 Total : 2016.00 7.88 7972.06 6987.55 7995.87 00:18:16.033 00:18:16.033 17:22:25 -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:18:16.033 17:22:25 -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:18:16.033 17:22:25 -- host/dma.sh@48 -- # local subsystem=0 00:18:16.033 17:22:25 -- host/dma.sh@50 -- # jq . 00:18:16.033 [2024-04-24 17:22:25.124438] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:18:16.033 [2024-04-24 17:22:25.124481] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3035316 ] 00:18:16.033 EAL: No free 2048 kB hugepages reported on node 1 00:18:16.033 [2024-04-24 17:22:25.172863] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:16.033 [2024-04-24 17:22:25.243682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:16.033 [2024-04-24 17:22:25.243684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.291 [2024-04-24 17:22:25.450115] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:18:21.657 [2024-04-24 17:22:30.479829] app.c: 937:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:18:21.657 bdev fe6e8899-443e-4557-898c-61de39e18cee reports 1 memory domains 00:18:21.657 bdev fe6e8899-443e-4557-898c-61de39e18cee supports RDMA memory domain 00:18:21.657 Initialization complete, running randrw IO for 5 sec on 2 cores 00:18:21.657 ========================================================================== 00:18:21.657 Latency [us] 00:18:21.657 IOPS MiB/s Average min max 00:18:21.657 Core 2: 19404.00 75.80 823.66 26.70 11199.37 00:18:21.657 Core 3: 19671.50 76.84 812.51 12.12 11338.11 00:18:21.657 ========================================================================== 00:18:21.657 Total : 39075.50 152.64 818.05 12.12 11338.11 00:18:21.657 00:18:21.657 Total operations: 195454, translate 195345 pull_push 0 memzero 109 00:18:21.657 17:22:30 -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:18:21.657 17:22:30 -- host/dma.sh@120 -- # nvmftestfini 00:18:21.657 17:22:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:21.657 17:22:30 -- nvmf/common.sh@117 -- # sync 00:18:21.657 17:22:30 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:21.657 17:22:30 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:21.657 17:22:30 -- nvmf/common.sh@120 -- # set +e 00:18:21.657 17:22:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:21.657 17:22:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:21.657 rmmod nvme_rdma 00:18:21.657 rmmod nvme_fabrics 00:18:21.657 17:22:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:21.657 17:22:30 -- nvmf/common.sh@124 -- # set -e 00:18:21.657 17:22:30 -- 
nvmf/common.sh@125 -- # return 0 00:18:21.657 17:22:30 -- nvmf/common.sh@478 -- # '[' -n 3035022 ']' 00:18:21.657 17:22:30 -- nvmf/common.sh@479 -- # killprocess 3035022 00:18:21.657 17:22:30 -- common/autotest_common.sh@936 -- # '[' -z 3035022 ']' 00:18:21.657 17:22:30 -- common/autotest_common.sh@940 -- # kill -0 3035022 00:18:21.657 17:22:30 -- common/autotest_common.sh@941 -- # uname 00:18:21.657 17:22:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:21.657 17:22:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3035022 00:18:21.657 17:22:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:21.657 17:22:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:21.657 17:22:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3035022' 00:18:21.657 killing process with pid 3035022 00:18:21.657 17:22:30 -- common/autotest_common.sh@955 -- # kill 3035022 00:18:21.657 17:22:30 -- common/autotest_common.sh@960 -- # wait 3035022 00:18:21.916 17:22:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:21.916 17:22:31 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:18:21.916 00:18:21.916 real 0m31.197s 00:18:21.916 user 1m36.189s 00:18:21.916 sys 0m4.792s 00:18:21.916 17:22:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:21.916 17:22:31 -- common/autotest_common.sh@10 -- # set +x 00:18:21.916 ************************************ 00:18:21.916 END TEST dma 00:18:21.916 ************************************ 00:18:22.175 17:22:31 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:18:22.175 17:22:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:22.175 17:22:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:22.175 17:22:31 -- common/autotest_common.sh@10 -- # set +x 00:18:22.175 ************************************ 00:18:22.175 START TEST nvmf_identify 00:18:22.175 ************************************ 00:18:22.175 17:22:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:18:22.175 * Looking for test storage... 
00:18:22.175 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:22.175 17:22:31 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:22.175 17:22:31 -- nvmf/common.sh@7 -- # uname -s 00:18:22.175 17:22:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:22.175 17:22:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.175 17:22:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.175 17:22:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.175 17:22:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.175 17:22:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.175 17:22:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.175 17:22:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.175 17:22:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.175 17:22:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.175 17:22:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:22.175 17:22:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:18:22.175 17:22:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:22.176 17:22:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.176 17:22:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:22.176 17:22:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:22.176 17:22:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:22.176 17:22:31 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.176 17:22:31 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.176 17:22:31 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.176 17:22:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.176 17:22:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.176 17:22:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.176 17:22:31 -- paths/export.sh@5 -- # export PATH 00:18:22.176 17:22:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.176 17:22:31 -- nvmf/common.sh@47 -- # : 0 00:18:22.176 17:22:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:22.176 17:22:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:22.176 17:22:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:22.176 17:22:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.176 17:22:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.176 17:22:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:22.176 17:22:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:22.176 17:22:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:22.176 17:22:31 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:22.176 17:22:31 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:22.176 17:22:31 -- host/identify.sh@14 -- # nvmftestinit 00:18:22.176 17:22:31 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:18:22.176 17:22:31 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:22.176 17:22:31 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:22.176 17:22:31 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:22.176 17:22:31 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:22.176 17:22:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.176 17:22:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.176 17:22:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.176 17:22:31 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:22.176 17:22:31 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:22.176 17:22:31 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:22.176 17:22:31 -- common/autotest_common.sh@10 -- # set +x 00:18:27.440 17:22:35 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:27.440 17:22:35 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:27.440 17:22:35 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:27.440 17:22:35 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:27.440 17:22:35 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:27.440 17:22:35 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:27.440 17:22:35 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:27.440 17:22:35 -- nvmf/common.sh@295 -- # net_devs=() 00:18:27.440 17:22:35 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:27.440 17:22:35 -- nvmf/common.sh@296 
-- # e810=() 00:18:27.440 17:22:35 -- nvmf/common.sh@296 -- # local -ga e810 00:18:27.440 17:22:35 -- nvmf/common.sh@297 -- # x722=() 00:18:27.440 17:22:35 -- nvmf/common.sh@297 -- # local -ga x722 00:18:27.440 17:22:35 -- nvmf/common.sh@298 -- # mlx=() 00:18:27.441 17:22:35 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:27.441 17:22:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:27.441 17:22:35 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:27.441 17:22:35 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:27.441 17:22:35 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:27.441 17:22:35 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:27.441 17:22:35 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:27.441 17:22:35 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:27.441 17:22:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:27.441 17:22:35 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:27.441 17:22:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:27.441 17:22:35 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:27.441 17:22:35 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:27.441 17:22:35 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:27.441 17:22:35 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:27.441 17:22:35 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:27.441 17:22:35 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:27.441 17:22:35 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:27.441 17:22:35 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:27.441 17:22:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:27.441 17:22:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:18:27.441 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:18:27.441 17:22:35 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:27.441 17:22:35 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:27.441 17:22:35 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:27.441 17:22:35 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:27.441 17:22:35 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:27.441 17:22:35 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:27.441 17:22:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:27.441 17:22:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:18:27.441 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:18:27.441 17:22:35 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:27.441 17:22:35 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:27.441 17:22:35 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:27.441 17:22:35 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:27.441 17:22:35 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:27.441 17:22:35 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:27.441 17:22:35 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:27.441 17:22:35 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:27.441 17:22:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:27.441 17:22:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.441 17:22:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 
00:18:27.441 17:22:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.441 17:22:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:18:27.441 Found net devices under 0000:da:00.0: mlx_0_0 00:18:27.441 17:22:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.441 17:22:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:27.441 17:22:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.441 17:22:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:27.441 17:22:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.441 17:22:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:18:27.441 Found net devices under 0000:da:00.1: mlx_0_1 00:18:27.441 17:22:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.441 17:22:35 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:27.441 17:22:35 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:27.441 17:22:35 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:27.441 17:22:35 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:18:27.441 17:22:35 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:18:27.441 17:22:35 -- nvmf/common.sh@409 -- # rdma_device_init 00:18:27.441 17:22:35 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:18:27.441 17:22:35 -- nvmf/common.sh@58 -- # uname 00:18:27.441 17:22:35 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:27.441 17:22:35 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:27.441 17:22:35 -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:27.441 17:22:35 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:27.441 17:22:35 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:27.441 17:22:35 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:27.441 17:22:35 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:27.441 17:22:35 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:27.441 17:22:35 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:18:27.441 17:22:35 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:27.441 17:22:35 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:27.441 17:22:35 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:27.441 17:22:35 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:27.441 17:22:35 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:27.441 17:22:35 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:27.441 17:22:35 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:27.441 17:22:35 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:27.441 17:22:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:27.441 17:22:35 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:27.441 17:22:35 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:27.441 17:22:35 -- nvmf/common.sh@105 -- # continue 2 00:18:27.441 17:22:35 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:27.441 17:22:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:27.441 17:22:35 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:27.441 17:22:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:27.441 17:22:35 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:27.441 17:22:35 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:27.441 17:22:35 -- nvmf/common.sh@105 -- # continue 2 00:18:27.441 17:22:35 -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:18:27.441 17:22:35 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:27.441 17:22:35 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:27.441 17:22:35 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:27.441 17:22:35 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:27.441 17:22:35 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:27.441 17:22:35 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:27.441 17:22:35 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:27.441 17:22:35 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:27.441 434: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:27.441 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:18:27.441 altname enp218s0f0np0 00:18:27.441 altname ens818f0np0 00:18:27.441 inet 192.168.100.8/24 scope global mlx_0_0 00:18:27.442 valid_lft forever preferred_lft forever 00:18:27.442 17:22:35 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:27.442 17:22:35 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:27.442 17:22:35 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:27.442 17:22:35 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:27.442 17:22:35 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:27.442 17:22:35 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:27.442 17:22:35 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:27.442 17:22:35 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:27.442 17:22:35 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:27.442 435: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:27.442 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:18:27.442 altname enp218s0f1np1 00:18:27.442 altname ens818f1np1 00:18:27.442 inet 192.168.100.9/24 scope global mlx_0_1 00:18:27.442 valid_lft forever preferred_lft forever 00:18:27.442 17:22:35 -- nvmf/common.sh@411 -- # return 0 00:18:27.442 17:22:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:27.442 17:22:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:27.442 17:22:35 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:18:27.442 17:22:35 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:18:27.442 17:22:35 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:27.442 17:22:35 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:27.442 17:22:35 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:27.442 17:22:35 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:27.442 17:22:35 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:27.442 17:22:35 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:27.442 17:22:35 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:27.442 17:22:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:27.442 17:22:35 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:27.442 17:22:35 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:27.442 17:22:35 -- nvmf/common.sh@105 -- # continue 2 00:18:27.442 17:22:35 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:27.442 17:22:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:27.442 17:22:35 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:27.442 17:22:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:27.442 17:22:35 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:27.442 17:22:35 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:27.442 17:22:35 -- 
nvmf/common.sh@105 -- # continue 2 00:18:27.442 17:22:35 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:27.442 17:22:35 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:27.442 17:22:35 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:27.442 17:22:35 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:27.442 17:22:35 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:27.442 17:22:35 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:27.442 17:22:35 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:27.442 17:22:35 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:27.442 17:22:35 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:27.442 17:22:35 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:27.442 17:22:35 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:27.442 17:22:35 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:27.442 17:22:35 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:18:27.442 192.168.100.9' 00:18:27.442 17:22:35 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:27.442 192.168.100.9' 00:18:27.442 17:22:35 -- nvmf/common.sh@446 -- # head -n 1 00:18:27.442 17:22:35 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:27.442 17:22:35 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:18:27.442 192.168.100.9' 00:18:27.442 17:22:35 -- nvmf/common.sh@447 -- # tail -n +2 00:18:27.442 17:22:35 -- nvmf/common.sh@447 -- # head -n 1 00:18:27.442 17:22:35 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:27.442 17:22:35 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:18:27.442 17:22:35 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:27.442 17:22:35 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:18:27.442 17:22:35 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:18:27.442 17:22:35 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:18:27.442 17:22:35 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:18:27.442 17:22:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:27.442 17:22:35 -- common/autotest_common.sh@10 -- # set +x 00:18:27.442 17:22:35 -- host/identify.sh@19 -- # nvmfpid=3037607 00:18:27.442 17:22:35 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:27.442 17:22:35 -- host/identify.sh@23 -- # waitforlisten 3037607 00:18:27.442 17:22:35 -- common/autotest_common.sh@817 -- # '[' -z 3037607 ']' 00:18:27.442 17:22:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.442 17:22:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:27.442 17:22:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.442 17:22:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:27.442 17:22:35 -- common/autotest_common.sh@10 -- # set +x 00:18:27.442 17:22:35 -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:27.442 [2024-04-24 17:22:35.972535] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:18:27.442 [2024-04-24 17:22:35.972578] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.442 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.442 [2024-04-24 17:22:36.027285] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:27.442 [2024-04-24 17:22:36.105131] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.442 [2024-04-24 17:22:36.105168] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:27.442 [2024-04-24 17:22:36.105175] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:27.442 [2024-04-24 17:22:36.105181] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:27.442 [2024-04-24 17:22:36.105186] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:27.442 [2024-04-24 17:22:36.105233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.442 [2024-04-24 17:22:36.105249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.442 [2024-04-24 17:22:36.105354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:27.442 [2024-04-24 17:22:36.105355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.700 17:22:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:27.700 17:22:36 -- common/autotest_common.sh@850 -- # return 0 00:18:27.700 17:22:36 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:27.700 17:22:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:27.700 17:22:36 -- common/autotest_common.sh@10 -- # set +x 00:18:27.700 [2024-04-24 17:22:36.803376] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6bbf60/0x6c0450) succeed. 00:18:27.700 [2024-04-24 17:22:36.813723] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6bd550/0x701ae0) succeed. 
00:18:27.700 17:22:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:27.700 17:22:36 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:18:27.700 17:22:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:27.700 17:22:36 -- common/autotest_common.sh@10 -- # set +x 00:18:27.966 17:22:36 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:27.966 17:22:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:27.966 17:22:36 -- common/autotest_common.sh@10 -- # set +x 00:18:27.966 Malloc0 00:18:27.966 17:22:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:27.966 17:22:36 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:27.966 17:22:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:27.966 17:22:36 -- common/autotest_common.sh@10 -- # set +x 00:18:27.966 17:22:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:27.966 17:22:36 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:18:27.966 17:22:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:27.966 17:22:37 -- common/autotest_common.sh@10 -- # set +x 00:18:27.966 17:22:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:27.966 17:22:37 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:27.966 17:22:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:27.966 17:22:37 -- common/autotest_common.sh@10 -- # set +x 00:18:27.966 [2024-04-24 17:22:37.020264] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:27.966 17:22:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:27.966 17:22:37 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:27.966 17:22:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:27.966 17:22:37 -- common/autotest_common.sh@10 -- # set +x 00:18:27.966 17:22:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:27.966 17:22:37 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:18:27.966 17:22:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:27.966 17:22:37 -- common/autotest_common.sh@10 -- # set +x 00:18:27.966 [2024-04-24 17:22:37.035941] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:18:27.966 [ 00:18:27.966 { 00:18:27.966 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:27.966 "subtype": "Discovery", 00:18:27.966 "listen_addresses": [ 00:18:27.966 { 00:18:27.966 "transport": "RDMA", 00:18:27.966 "trtype": "RDMA", 00:18:27.966 "adrfam": "IPv4", 00:18:27.966 "traddr": "192.168.100.8", 00:18:27.966 "trsvcid": "4420" 00:18:27.966 } 00:18:27.966 ], 00:18:27.966 "allow_any_host": true, 00:18:27.966 "hosts": [] 00:18:27.966 }, 00:18:27.966 { 00:18:27.966 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.966 "subtype": "NVMe", 00:18:27.966 "listen_addresses": [ 00:18:27.966 { 00:18:27.966 "transport": "RDMA", 00:18:27.966 "trtype": "RDMA", 00:18:27.966 "adrfam": "IPv4", 00:18:27.966 "traddr": "192.168.100.8", 00:18:27.966 "trsvcid": "4420" 00:18:27.966 } 00:18:27.966 ], 00:18:27.966 "allow_any_host": true, 00:18:27.966 "hosts": [], 00:18:27.966 "serial_number": "SPDK00000000000001", 
00:18:27.966 "model_number": "SPDK bdev Controller", 00:18:27.966 "max_namespaces": 32, 00:18:27.966 "min_cntlid": 1, 00:18:27.966 "max_cntlid": 65519, 00:18:27.966 "namespaces": [ 00:18:27.966 { 00:18:27.966 "nsid": 1, 00:18:27.966 "bdev_name": "Malloc0", 00:18:27.966 "name": "Malloc0", 00:18:27.966 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:18:27.966 "eui64": "ABCDEF0123456789", 00:18:27.966 "uuid": "3bee7370-a392-471f-9b90-f1e174a5d1d0" 00:18:27.966 } 00:18:27.966 ] 00:18:27.966 } 00:18:27.966 ] 00:18:27.966 17:22:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:27.966 17:22:37 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:18:27.966 [2024-04-24 17:22:37.071734] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:18:27.966 [2024-04-24 17:22:37.071779] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3037641 ] 00:18:27.966 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.966 [2024-04-24 17:22:37.113953] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:18:27.966 [2024-04-24 17:22:37.114026] nvme_rdma.c:2261:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:18:27.966 [2024-04-24 17:22:37.114042] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:18:27.966 [2024-04-24 17:22:37.114045] nvme_rdma.c:1295:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:18:27.966 [2024-04-24 17:22:37.114074] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:18:27.966 [2024-04-24 17:22:37.127351] nvme_rdma.c: 510:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:18:27.966 [2024-04-24 17:22:37.137665] nvme_rdma.c:1180:nvme_rdma_connect_established: *DEBUG*: rc =0 00:18:27.966 [2024-04-24 17:22:37.137682] nvme_rdma.c:1185:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:18:27.966 [2024-04-24 17:22:37.137688] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x182f00 00:18:27.966 [2024-04-24 17:22:37.137693] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x182f00 00:18:27.966 [2024-04-24 17:22:37.137698] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x182f00 00:18:27.966 [2024-04-24 17:22:37.137702] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x182f00 00:18:27.966 [2024-04-24 17:22:37.137706] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x182f00 00:18:27.966 [2024-04-24 17:22:37.137710] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x182f00 00:18:27.966 [2024-04-24 17:22:37.137714] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x182f00 00:18:27.966 [2024-04-24 17:22:37.137718] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x182f00 00:18:27.966 [2024-04-24 17:22:37.137723] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x182f00 00:18:27.966 [2024-04-24 17:22:37.137727] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x182f00 00:18:27.966 [2024-04-24 17:22:37.137731] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x182f00 00:18:27.966 [2024-04-24 17:22:37.137735] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf838 length 0x10 lkey 0x182f00 00:18:27.966 [2024-04-24 17:22:37.137739] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf860 length 0x10 lkey 0x182f00 00:18:27.966 [2024-04-24 17:22:37.137744] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf888 length 0x10 lkey 0x182f00 00:18:27.966 [2024-04-24 17:22:37.137748] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8b0 length 0x10 lkey 0x182f00 00:18:27.966 [2024-04-24 17:22:37.137752] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8d8 length 0x10 lkey 0x182f00 00:18:27.966 [2024-04-24 17:22:37.137756] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf900 length 0x10 lkey 0x182f00 00:18:27.966 [2024-04-24 17:22:37.137760] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf928 length 0x10 lkey 0x182f00 00:18:27.966 [2024-04-24 17:22:37.137765] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf950 length 0x10 lkey 0x182f00 00:18:27.966 [2024-04-24 17:22:37.137771] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf978 length 0x10 lkey 0x182f00 00:18:27.966 [2024-04-24 17:22:37.137776] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9a0 length 0x10 lkey 0x182f00 00:18:27.966 [2024-04-24 17:22:37.137780] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9c8 length 0x10 lkey 0x182f00 00:18:27.966 [2024-04-24 17:22:37.137784] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9f0 length 0x10 lkey 0x182f00 00:18:27.966 [2024-04-24 
17:22:37.137788] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa18 length 0x10 lkey 0x182f00 00:18:27.966 [2024-04-24 17:22:37.137792] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa40 length 0x10 lkey 0x182f00 00:18:27.966 [2024-04-24 17:22:37.137796] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa68 length 0x10 lkey 0x182f00 00:18:27.966 [2024-04-24 17:22:37.137801] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa90 length 0x10 lkey 0x182f00 00:18:27.966 [2024-04-24 17:22:37.137805] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfab8 length 0x10 lkey 0x182f00 00:18:27.966 [2024-04-24 17:22:37.137809] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfae0 length 0x10 lkey 0x182f00 00:18:27.966 [2024-04-24 17:22:37.137813] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb08 length 0x10 lkey 0x182f00 00:18:27.966 [2024-04-24 17:22:37.137817] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb30 length 0x10 lkey 0x182f00 00:18:27.966 [2024-04-24 17:22:37.137821] nvme_rdma.c:1199:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:18:27.966 [2024-04-24 17:22:37.137828] nvme_rdma.c:1202:nvme_rdma_connect_established: *DEBUG*: rc =0 00:18:27.966 [2024-04-24 17:22:37.137832] nvme_rdma.c:1207:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:18:27.966 [2024-04-24 17:22:37.137847] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182f00 00:18:27.966 [2024-04-24 17:22:37.137858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf200 len:0x400 key:0x182f00 00:18:27.966 [2024-04-24 17:22:37.142831] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.966 [2024-04-24 17:22:37.142839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:18:27.966 [2024-04-24 17:22:37.142845] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x182f00 00:18:27.966 [2024-04-24 17:22:37.142850] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:18:27.966 [2024-04-24 17:22:37.142856] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:18:27.966 [2024-04-24 17:22:37.142860] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:18:27.966 [2024-04-24 17:22:37.142874] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182f00 00:18:27.967 [2024-04-24 17:22:37.142881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.967 [2024-04-24 17:22:37.142903] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.967 [2024-04-24 17:22:37.142908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:18:27.967 [2024-04-24 17:22:37.142914] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:18:27.967 [2024-04-24 17:22:37.142919] 
nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x182f00 00:18:27.967 [2024-04-24 17:22:37.142923] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:18:27.967 [2024-04-24 17:22:37.142931] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182f00 00:18:27.967 [2024-04-24 17:22:37.142937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.967 [2024-04-24 17:22:37.142954] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.967 [2024-04-24 17:22:37.142958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:18:27.967 [2024-04-24 17:22:37.142964] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:18:27.967 [2024-04-24 17:22:37.142968] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x182f00 00:18:27.967 [2024-04-24 17:22:37.142973] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:18:27.967 [2024-04-24 17:22:37.142978] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182f00 00:18:27.967 [2024-04-24 17:22:37.142983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.967 [2024-04-24 17:22:37.142999] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.967 [2024-04-24 17:22:37.143003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:27.967 [2024-04-24 17:22:37.143008] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:27.967 [2024-04-24 17:22:37.143012] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x182f00 00:18:27.967 [2024-04-24 17:22:37.143019] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182f00 00:18:27.967 [2024-04-24 17:22:37.143025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.967 [2024-04-24 17:22:37.143040] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.967 [2024-04-24 17:22:37.143044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:27.967 [2024-04-24 17:22:37.143049] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:18:27.967 [2024-04-24 17:22:37.143053] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:18:27.967 [2024-04-24 17:22:37.143057] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x182f00 00:18:27.967 [2024-04-24 17:22:37.143061] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:27.967 [2024-04-24 17:22:37.143166] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:18:27.967 [2024-04-24 17:22:37.143170] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:27.967 [2024-04-24 17:22:37.143177] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182f00 00:18:27.967 [2024-04-24 17:22:37.143182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.967 [2024-04-24 17:22:37.143198] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.967 [2024-04-24 17:22:37.143202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:27.967 [2024-04-24 17:22:37.143207] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:27.967 [2024-04-24 17:22:37.143212] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x182f00 00:18:27.967 [2024-04-24 17:22:37.143219] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182f00 00:18:27.967 [2024-04-24 17:22:37.143224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.967 [2024-04-24 17:22:37.143244] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.967 [2024-04-24 17:22:37.143248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:18:27.967 [2024-04-24 17:22:37.143253] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:27.967 [2024-04-24 17:22:37.143256] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:18:27.967 [2024-04-24 17:22:37.143260] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x182f00 00:18:27.967 [2024-04-24 17:22:37.143265] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:18:27.967 [2024-04-24 17:22:37.143272] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:18:27.967 [2024-04-24 17:22:37.143279] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182f00 00:18:27.967 [2024-04-24 17:22:37.143285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182f00 00:18:27.967 [2024-04-24 17:22:37.143322] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.967 [2024-04-24 17:22:37.143326] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:27.967 [2024-04-24 17:22:37.143333] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:18:27.967 [2024-04-24 17:22:37.143337] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:18:27.967 [2024-04-24 17:22:37.143340] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:18:27.967 [2024-04-24 17:22:37.143347] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:18:27.967 [2024-04-24 17:22:37.143351] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:18:27.967 [2024-04-24 17:22:37.143354] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:18:27.967 [2024-04-24 17:22:37.143358] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x182f00 00:18:27.967 [2024-04-24 17:22:37.143364] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:18:27.967 [2024-04-24 17:22:37.143369] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182f00 00:18:27.967 [2024-04-24 17:22:37.143375] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.967 [2024-04-24 17:22:37.143397] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.967 [2024-04-24 17:22:37.143401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:27.967 [2024-04-24 17:22:37.143408] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0500 length 0x40 lkey 0x182f00 00:18:27.967 [2024-04-24 17:22:37.143415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:27.967 [2024-04-24 17:22:37.143420] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0640 length 0x40 lkey 0x182f00 00:18:27.967 [2024-04-24 17:22:37.143425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:27.967 [2024-04-24 17:22:37.143430] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.967 [2024-04-24 17:22:37.143435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:27.967 [2024-04-24 17:22:37.143440] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d08c0 length 0x40 lkey 0x182f00 00:18:27.967 [2024-04-24 17:22:37.143445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:27.967 [2024-04-24 17:22:37.143448] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout 
(timeout 30000 ms) 00:18:27.967 [2024-04-24 17:22:37.143452] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x182f00 00:18:27.967 [2024-04-24 17:22:37.143461] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:27.967 [2024-04-24 17:22:37.143466] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182f00 00:18:27.967 [2024-04-24 17:22:37.143471] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.967 [2024-04-24 17:22:37.143490] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.967 [2024-04-24 17:22:37.143494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:18:27.967 [2024-04-24 17:22:37.143499] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:18:27.967 [2024-04-24 17:22:37.143503] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:18:27.967 [2024-04-24 17:22:37.143506] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x182f00 00:18:27.967 [2024-04-24 17:22:37.143513] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182f00 00:18:27.967 [2024-04-24 17:22:37.143519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182f00 00:18:27.968 [2024-04-24 17:22:37.143544] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.968 [2024-04-24 17:22:37.143548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:27.968 [2024-04-24 17:22:37.143553] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x182f00 00:18:27.968 [2024-04-24 17:22:37.143560] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:18:27.968 [2024-04-24 17:22:37.143577] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182f00 00:18:27.968 [2024-04-24 17:22:37.143583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x182f00 00:18:27.968 [2024-04-24 17:22:37.143589] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a00 length 0x40 lkey 0x182f00 00:18:27.968 [2024-04-24 17:22:37.143595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:18:27.968 [2024-04-24 17:22:37.143615] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.968 [2024-04-24 17:22:37.143619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:27.968 [2024-04-24 17:22:37.143629] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: 
local addr 0x2000003d0b40 length 0x40 lkey 0x182f00 00:18:27.968 [2024-04-24 17:22:37.143634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x182f00 00:18:27.968 [2024-04-24 17:22:37.143638] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf838 length 0x10 lkey 0x182f00 00:18:27.968 [2024-04-24 17:22:37.143643] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.968 [2024-04-24 17:22:37.143647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:27.968 [2024-04-24 17:22:37.143651] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf860 length 0x10 lkey 0x182f00 00:18:27.968 [2024-04-24 17:22:37.143668] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.968 [2024-04-24 17:22:37.143672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:27.968 [2024-04-24 17:22:37.143680] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a00 length 0x40 lkey 0x182f00 00:18:27.968 [2024-04-24 17:22:37.143686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x182f00 00:18:27.968 [2024-04-24 17:22:37.143690] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf888 length 0x10 lkey 0x182f00 00:18:27.968 [2024-04-24 17:22:37.143714] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.968 [2024-04-24 17:22:37.143718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:27.968 [2024-04-24 17:22:37.143727] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8b0 length 0x10 lkey 0x182f00 00:18:27.968 ===================================================== 00:18:27.968 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:27.968 ===================================================== 00:18:27.968 Controller Capabilities/Features 00:18:27.968 ================================ 00:18:27.968 Vendor ID: 0000 00:18:27.968 Subsystem Vendor ID: 0000 00:18:27.968 Serial Number: .................... 00:18:27.968 Model Number: ........................................ 
00:18:27.968 Firmware Version: 24.05 00:18:27.968 Recommended Arb Burst: 0 00:18:27.968 IEEE OUI Identifier: 00 00 00 00:18:27.968 Multi-path I/O 00:18:27.968 May have multiple subsystem ports: No 00:18:27.968 May have multiple controllers: No 00:18:27.968 Associated with SR-IOV VF: No 00:18:27.968 Max Data Transfer Size: 131072 00:18:27.968 Max Number of Namespaces: 0 00:18:27.968 Max Number of I/O Queues: 1024 00:18:27.968 NVMe Specification Version (VS): 1.3 00:18:27.968 NVMe Specification Version (Identify): 1.3 00:18:27.968 Maximum Queue Entries: 128 00:18:27.968 Contiguous Queues Required: Yes 00:18:27.968 Arbitration Mechanisms Supported 00:18:27.968 Weighted Round Robin: Not Supported 00:18:27.968 Vendor Specific: Not Supported 00:18:27.968 Reset Timeout: 15000 ms 00:18:27.968 Doorbell Stride: 4 bytes 00:18:27.968 NVM Subsystem Reset: Not Supported 00:18:27.968 Command Sets Supported 00:18:27.968 NVM Command Set: Supported 00:18:27.968 Boot Partition: Not Supported 00:18:27.968 Memory Page Size Minimum: 4096 bytes 00:18:27.968 Memory Page Size Maximum: 4096 bytes 00:18:27.968 Persistent Memory Region: Not Supported 00:18:27.968 Optional Asynchronous Events Supported 00:18:27.968 Namespace Attribute Notices: Not Supported 00:18:27.968 Firmware Activation Notices: Not Supported 00:18:27.968 ANA Change Notices: Not Supported 00:18:27.968 PLE Aggregate Log Change Notices: Not Supported 00:18:27.968 LBA Status Info Alert Notices: Not Supported 00:18:27.968 EGE Aggregate Log Change Notices: Not Supported 00:18:27.968 Normal NVM Subsystem Shutdown event: Not Supported 00:18:27.968 Zone Descriptor Change Notices: Not Supported 00:18:27.968 Discovery Log Change Notices: Supported 00:18:27.968 Controller Attributes 00:18:27.968 128-bit Host Identifier: Not Supported 00:18:27.968 Non-Operational Permissive Mode: Not Supported 00:18:27.968 NVM Sets: Not Supported 00:18:27.968 Read Recovery Levels: Not Supported 00:18:27.968 Endurance Groups: Not Supported 00:18:27.968 Predictable Latency Mode: Not Supported 00:18:27.968 Traffic Based Keep ALive: Not Supported 00:18:27.968 Namespace Granularity: Not Supported 00:18:27.968 SQ Associations: Not Supported 00:18:27.968 UUID List: Not Supported 00:18:27.968 Multi-Domain Subsystem: Not Supported 00:18:27.968 Fixed Capacity Management: Not Supported 00:18:27.968 Variable Capacity Management: Not Supported 00:18:27.968 Delete Endurance Group: Not Supported 00:18:27.968 Delete NVM Set: Not Supported 00:18:27.968 Extended LBA Formats Supported: Not Supported 00:18:27.968 Flexible Data Placement Supported: Not Supported 00:18:27.968 00:18:27.968 Controller Memory Buffer Support 00:18:27.968 ================================ 00:18:27.968 Supported: No 00:18:27.968 00:18:27.968 Persistent Memory Region Support 00:18:27.968 ================================ 00:18:27.968 Supported: No 00:18:27.968 00:18:27.968 Admin Command Set Attributes 00:18:27.968 ============================ 00:18:27.968 Security Send/Receive: Not Supported 00:18:27.968 Format NVM: Not Supported 00:18:27.968 Firmware Activate/Download: Not Supported 00:18:27.968 Namespace Management: Not Supported 00:18:27.968 Device Self-Test: Not Supported 00:18:27.968 Directives: Not Supported 00:18:27.968 NVMe-MI: Not Supported 00:18:27.968 Virtualization Management: Not Supported 00:18:27.968 Doorbell Buffer Config: Not Supported 00:18:27.968 Get LBA Status Capability: Not Supported 00:18:27.968 Command & Feature Lockdown Capability: Not Supported 00:18:27.968 Abort Command Limit: 1 00:18:27.968 Async 
Event Request Limit: 4 00:18:27.968 Number of Firmware Slots: N/A 00:18:27.968 Firmware Slot 1 Read-Only: N/A 00:18:27.968 Firmware Activation Without Reset: N/A 00:18:27.968 Multiple Update Detection Support: N/A 00:18:27.968 Firmware Update Granularity: No Information Provided 00:18:27.968 Per-Namespace SMART Log: No 00:18:27.968 Asymmetric Namespace Access Log Page: Not Supported 00:18:27.968 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:18:27.968 Command Effects Log Page: Not Supported 00:18:27.968 Get Log Page Extended Data: Supported 00:18:27.968 Telemetry Log Pages: Not Supported 00:18:27.968 Persistent Event Log Pages: Not Supported 00:18:27.968 Supported Log Pages Log Page: May Support 00:18:27.968 Commands Supported & Effects Log Page: Not Supported 00:18:27.968 Feature Identifiers & Effects Log Page:May Support 00:18:27.968 NVMe-MI Commands & Effects Log Page: May Support 00:18:27.968 Data Area 4 for Telemetry Log: Not Supported 00:18:27.968 Error Log Page Entries Supported: 128 00:18:27.968 Keep Alive: Not Supported 00:18:27.968 00:18:27.968 NVM Command Set Attributes 00:18:27.968 ========================== 00:18:27.968 Submission Queue Entry Size 00:18:27.968 Max: 1 00:18:27.968 Min: 1 00:18:27.968 Completion Queue Entry Size 00:18:27.968 Max: 1 00:18:27.968 Min: 1 00:18:27.968 Number of Namespaces: 0 00:18:27.968 Compare Command: Not Supported 00:18:27.968 Write Uncorrectable Command: Not Supported 00:18:27.968 Dataset Management Command: Not Supported 00:18:27.968 Write Zeroes Command: Not Supported 00:18:27.968 Set Features Save Field: Not Supported 00:18:27.968 Reservations: Not Supported 00:18:27.968 Timestamp: Not Supported 00:18:27.968 Copy: Not Supported 00:18:27.968 Volatile Write Cache: Not Present 00:18:27.968 Atomic Write Unit (Normal): 1 00:18:27.968 Atomic Write Unit (PFail): 1 00:18:27.968 Atomic Compare & Write Unit: 1 00:18:27.968 Fused Compare & Write: Supported 00:18:27.968 Scatter-Gather List 00:18:27.968 SGL Command Set: Supported 00:18:27.968 SGL Keyed: Supported 00:18:27.968 SGL Bit Bucket Descriptor: Not Supported 00:18:27.968 SGL Metadata Pointer: Not Supported 00:18:27.968 Oversized SGL: Not Supported 00:18:27.968 SGL Metadata Address: Not Supported 00:18:27.968 SGL Offset: Supported 00:18:27.969 Transport SGL Data Block: Not Supported 00:18:27.969 Replay Protected Memory Block: Not Supported 00:18:27.969 00:18:27.969 Firmware Slot Information 00:18:27.969 ========================= 00:18:27.969 Active slot: 0 00:18:27.969 00:18:27.969 00:18:27.969 Error Log 00:18:27.969 ========= 00:18:27.969 00:18:27.969 Active Namespaces 00:18:27.969 ================= 00:18:27.969 Discovery Log Page 00:18:27.969 ================== 00:18:27.969 Generation Counter: 2 00:18:27.969 Number of Records: 2 00:18:27.969 Record Format: 0 00:18:27.969 00:18:27.969 Discovery Log Entry 0 00:18:27.969 ---------------------- 00:18:27.969 Transport Type: 1 (RDMA) 00:18:27.969 Address Family: 1 (IPv4) 00:18:27.969 Subsystem Type: 3 (Current Discovery Subsystem) 00:18:27.969 Entry Flags: 00:18:27.969 Duplicate Returned Information: 1 00:18:27.969 Explicit Persistent Connection Support for Discovery: 1 00:18:27.969 Transport Requirements: 00:18:27.969 Secure Channel: Not Required 00:18:27.969 Port ID: 0 (0x0000) 00:18:27.969 Controller ID: 65535 (0xffff) 00:18:27.969 Admin Max SQ Size: 128 00:18:27.969 Transport Service Identifier: 4420 00:18:27.969 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:18:27.969 Transport Address: 192.168.100.8 00:18:27.969 
Transport Specific Address Subtype - RDMA 00:18:27.969 RDMA QP Service Type: 1 (Reliable Connected) 00:18:27.969 RDMA Provider Type: 1 (No provider specified) 00:18:27.969 RDMA CM Service: 1 (RDMA_CM) 00:18:27.969 Discovery Log Entry 1 00:18:27.969 ---------------------- 00:18:27.969 Transport Type: 1 (RDMA) 00:18:27.969 Address Family: 1 (IPv4) 00:18:27.969 Subsystem Type: 2 (NVM Subsystem) 00:18:27.969 Entry Flags: 00:18:27.969 Duplicate Returned Information: 0 00:18:27.969 Explicit Persistent Connection Support for Discovery: 0 00:18:27.969 Transport Requirements: 00:18:27.969 Secure Channel: Not Required 00:18:27.969 Port ID: 0 (0x0000) 00:18:27.969 Controller ID: 65535 (0xffff) 00:18:27.969 Admin Max SQ Size: [2024-04-24 17:22:37.143803] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:18:27.969 [2024-04-24 17:22:37.143810] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 21469 doesn't match qid 00:18:27.969 [2024-04-24 17:22:37.143823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32541 cdw0:5 sqhd:3790 p:0 m:0 dnr:0 00:18:27.969 [2024-04-24 17:22:37.143832] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 21469 doesn't match qid 00:18:27.969 [2024-04-24 17:22:37.143838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32541 cdw0:5 sqhd:3790 p:0 m:0 dnr:0 00:18:27.969 [2024-04-24 17:22:37.143842] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 21469 doesn't match qid 00:18:27.969 [2024-04-24 17:22:37.143848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32541 cdw0:5 sqhd:3790 p:0 m:0 dnr:0 00:18:27.969 [2024-04-24 17:22:37.143852] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 21469 doesn't match qid 00:18:27.969 [2024-04-24 17:22:37.143858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32541 cdw0:5 sqhd:3790 p:0 m:0 dnr:0 00:18:27.969 [2024-04-24 17:22:37.143865] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d08c0 length 0x40 lkey 0x182f00 00:18:27.969 [2024-04-24 17:22:37.143870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.969 [2024-04-24 17:22:37.143887] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.969 [2024-04-24 17:22:37.143893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:18:27.969 [2024-04-24 17:22:37.143901] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.969 [2024-04-24 17:22:37.143907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.969 [2024-04-24 17:22:37.143911] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d8 length 0x10 lkey 0x182f00 00:18:27.969 [2024-04-24 17:22:37.143929] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.969 [2024-04-24 17:22:37.143934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:27.969 [2024-04-24 17:22:37.143938] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:18:27.969 [2024-04-24 17:22:37.143942] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:18:27.969 [2024-04-24 17:22:37.143946] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf900 length 0x10 lkey 0x182f00 00:18:27.969 [2024-04-24 17:22:37.143952] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.969 [2024-04-24 17:22:37.143958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.969 [2024-04-24 17:22:37.143977] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.969 [2024-04-24 17:22:37.143981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:18:27.969 [2024-04-24 17:22:37.143986] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf928 length 0x10 lkey 0x182f00 00:18:27.969 [2024-04-24 17:22:37.143992] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.969 [2024-04-24 17:22:37.143998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.969 [2024-04-24 17:22:37.144018] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.969 [2024-04-24 17:22:37.144022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:18:27.969 [2024-04-24 17:22:37.144027] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf950 length 0x10 lkey 0x182f00 00:18:27.969 [2024-04-24 17:22:37.144033] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.969 [2024-04-24 17:22:37.144039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.969 [2024-04-24 17:22:37.144059] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.969 [2024-04-24 17:22:37.144065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:18:27.969 [2024-04-24 17:22:37.144070] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf978 length 0x10 lkey 0x182f00 00:18:27.969 [2024-04-24 17:22:37.144077] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.969 [2024-04-24 17:22:37.144084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.969 [2024-04-24 17:22:37.144098] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.969 [2024-04-24 17:22:37.144104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:18:27.969 [2024-04-24 17:22:37.144110] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9a0 length 0x10 lkey 0x182f00 00:18:27.969 [2024-04-24 17:22:37.144117] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 
00:18:27.969 [2024-04-24 17:22:37.144125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.969 [2024-04-24 17:22:37.144141] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.969 [2024-04-24 17:22:37.144145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:18:27.969 [2024-04-24 17:22:37.144149] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c8 length 0x10 lkey 0x182f00 00:18:27.969 [2024-04-24 17:22:37.144156] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.969 [2024-04-24 17:22:37.144162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.969 [2024-04-24 17:22:37.144183] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.969 [2024-04-24 17:22:37.144189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:18:27.969 [2024-04-24 17:22:37.144193] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9f0 length 0x10 lkey 0x182f00 00:18:27.969 [2024-04-24 17:22:37.144200] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.969 [2024-04-24 17:22:37.144207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.969 [2024-04-24 17:22:37.144230] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.969 [2024-04-24 17:22:37.144235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:18:27.969 [2024-04-24 17:22:37.144240] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa18 length 0x10 lkey 0x182f00 00:18:27.969 [2024-04-24 17:22:37.144246] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.969 [2024-04-24 17:22:37.144253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.969 [2024-04-24 17:22:37.144272] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.969 [2024-04-24 17:22:37.144277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:18:27.969 [2024-04-24 17:22:37.144281] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa40 length 0x10 lkey 0x182f00 00:18:27.969 [2024-04-24 17:22:37.144288] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.969 [2024-04-24 17:22:37.144295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.970 [2024-04-24 17:22:37.144314] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.970 [2024-04-24 17:22:37.144318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:18:27.970 [2024-04-24 17:22:37.144322] 
nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa68 length 0x10 lkey 0x182f00 00:18:27.970 [2024-04-24 17:22:37.144329] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.970 [2024-04-24 17:22:37.144335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.970 [2024-04-24 17:22:37.144355] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.970 [2024-04-24 17:22:37.144359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:18:27.970 [2024-04-24 17:22:37.144364] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa90 length 0x10 lkey 0x182f00 00:18:27.970 [2024-04-24 17:22:37.144372] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.970 [2024-04-24 17:22:37.144378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.970 [2024-04-24 17:22:37.144395] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.970 [2024-04-24 17:22:37.144399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:18:27.970 [2024-04-24 17:22:37.144403] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab8 length 0x10 lkey 0x182f00 00:18:27.970 [2024-04-24 17:22:37.144411] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.970 [2024-04-24 17:22:37.144416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.970 [2024-04-24 17:22:37.144434] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.970 [2024-04-24 17:22:37.144438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:18:27.970 [2024-04-24 17:22:37.144442] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfae0 length 0x10 lkey 0x182f00 00:18:27.970 [2024-04-24 17:22:37.144449] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.970 [2024-04-24 17:22:37.144454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.970 [2024-04-24 17:22:37.144476] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.970 [2024-04-24 17:22:37.144480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:18:27.970 [2024-04-24 17:22:37.144484] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb08 length 0x10 lkey 0x182f00 00:18:27.970 [2024-04-24 17:22:37.144491] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.970 [2024-04-24 17:22:37.144497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.970 [2024-04-24 17:22:37.144519] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.970 [2024-04-24 17:22:37.144523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:18:27.970 [2024-04-24 17:22:37.144527] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb30 length 0x10 lkey 0x182f00 00:18:27.970 [2024-04-24 17:22:37.144534] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.970 [2024-04-24 17:22:37.144539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.970 [2024-04-24 17:22:37.144555] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.970 [2024-04-24 17:22:37.144559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:18:27.970 [2024-04-24 17:22:37.144564] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x182f00 00:18:27.970 [2024-04-24 17:22:37.144570] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.970 [2024-04-24 17:22:37.144576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.970 [2024-04-24 17:22:37.144592] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.970 [2024-04-24 17:22:37.144596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:18:27.970 [2024-04-24 17:22:37.144600] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x182f00 00:18:27.970 [2024-04-24 17:22:37.144609] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.970 [2024-04-24 17:22:37.144614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.970 [2024-04-24 17:22:37.144633] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.970 [2024-04-24 17:22:37.144637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:18:27.970 [2024-04-24 17:22:37.144643] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x182f00 00:18:27.970 [2024-04-24 17:22:37.144649] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.970 [2024-04-24 17:22:37.144655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.970 [2024-04-24 17:22:37.144674] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.970 [2024-04-24 17:22:37.144678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:18:27.970 [2024-04-24 17:22:37.144682] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x182f00 00:18:27.970 [2024-04-24 17:22:37.144689] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 
0x40 lkey 0x182f00 00:18:27.970 [2024-04-24 17:22:37.144695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.970 [2024-04-24 17:22:37.144713] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.970 [2024-04-24 17:22:37.144717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:18:27.970 [2024-04-24 17:22:37.144722] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x182f00 00:18:27.970 [2024-04-24 17:22:37.144728] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.970 [2024-04-24 17:22:37.144734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.970 [2024-04-24 17:22:37.144750] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.970 [2024-04-24 17:22:37.144754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:18:27.970 [2024-04-24 17:22:37.144758] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x182f00 00:18:27.970 [2024-04-24 17:22:37.144765] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.970 [2024-04-24 17:22:37.144771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.970 [2024-04-24 17:22:37.144792] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.970 [2024-04-24 17:22:37.144797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:18:27.970 [2024-04-24 17:22:37.144801] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x182f00 00:18:27.970 [2024-04-24 17:22:37.144808] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.970 [2024-04-24 17:22:37.144813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.970 [2024-04-24 17:22:37.144832] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.970 [2024-04-24 17:22:37.144836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:18:27.971 [2024-04-24 17:22:37.144842] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.144849] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.144854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.971 [2024-04-24 17:22:37.144869] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.971 [2024-04-24 17:22:37.144873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:18:27.971 [2024-04-24 
17:22:37.144877] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.144884] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.144890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.971 [2024-04-24 17:22:37.144906] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.971 [2024-04-24 17:22:37.144910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:18:27.971 [2024-04-24 17:22:37.144914] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.144921] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.144927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.971 [2024-04-24 17:22:37.144949] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.971 [2024-04-24 17:22:37.144953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:18:27.971 [2024-04-24 17:22:37.144957] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.144964] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.144969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.971 [2024-04-24 17:22:37.144986] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.971 [2024-04-24 17:22:37.144990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:18:27.971 [2024-04-24 17:22:37.144994] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf838 length 0x10 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.145001] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.145006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.971 [2024-04-24 17:22:37.145027] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.971 [2024-04-24 17:22:37.145031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:18:27.971 [2024-04-24 17:22:37.145035] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf860 length 0x10 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.145042] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.145048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.971 [2024-04-24 17:22:37.145063] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.971 [2024-04-24 17:22:37.145067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:18:27.971 [2024-04-24 17:22:37.145072] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf888 length 0x10 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.145079] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.145085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.971 [2024-04-24 17:22:37.145103] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.971 [2024-04-24 17:22:37.145108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:18:27.971 [2024-04-24 17:22:37.145112] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8b0 length 0x10 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.145118] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.145124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.971 [2024-04-24 17:22:37.145140] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.971 [2024-04-24 17:22:37.145144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:18:27.971 [2024-04-24 17:22:37.145149] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d8 length 0x10 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.145155] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.145161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.971 [2024-04-24 17:22:37.145179] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.971 [2024-04-24 17:22:37.145183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:18:27.971 [2024-04-24 17:22:37.145187] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf900 length 0x10 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.145194] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.145199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.971 [2024-04-24 17:22:37.145216] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.971 [2024-04-24 17:22:37.145220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:18:27.971 [2024-04-24 17:22:37.145224] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf928 length 0x10 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.145231] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 
0x40 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.145236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.971 [2024-04-24 17:22:37.145251] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.971 [2024-04-24 17:22:37.145255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:18:27.971 [2024-04-24 17:22:37.145259] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf950 length 0x10 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.145266] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.145272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.971 [2024-04-24 17:22:37.145286] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.971 [2024-04-24 17:22:37.145293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:18:27.971 [2024-04-24 17:22:37.145297] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf978 length 0x10 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.145304] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.145310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.971 [2024-04-24 17:22:37.145323] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.971 [2024-04-24 17:22:37.145327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:18:27.971 [2024-04-24 17:22:37.145331] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9a0 length 0x10 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.145338] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.145343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.971 [2024-04-24 17:22:37.145366] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.971 [2024-04-24 17:22:37.145370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:18:27.971 [2024-04-24 17:22:37.145374] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c8 length 0x10 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.145381] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.145386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.971 [2024-04-24 17:22:37.145404] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.971 [2024-04-24 17:22:37.145408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:18:27.971 [2024-04-24 
17:22:37.145412] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9f0 length 0x10 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.145419] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.145425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.971 [2024-04-24 17:22:37.145441] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.971 [2024-04-24 17:22:37.145445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:18:27.971 [2024-04-24 17:22:37.145449] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa18 length 0x10 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.145455] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.145461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.971 [2024-04-24 17:22:37.145478] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.971 [2024-04-24 17:22:37.145482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:18:27.971 [2024-04-24 17:22:37.145486] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa40 length 0x10 lkey 0x182f00 00:18:27.971 [2024-04-24 17:22:37.145493] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.145498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.972 [2024-04-24 17:22:37.145519] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.972 [2024-04-24 17:22:37.145524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:18:27.972 [2024-04-24 17:22:37.145528] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa68 length 0x10 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.145535] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.145541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.972 [2024-04-24 17:22:37.145558] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.972 [2024-04-24 17:22:37.145562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:18:27.972 [2024-04-24 17:22:37.145567] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa90 length 0x10 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.145573] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.145579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.972 [2024-04-24 17:22:37.145600] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.972 [2024-04-24 17:22:37.145604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:18:27.972 [2024-04-24 17:22:37.145608] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab8 length 0x10 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.145615] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.145620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.972 [2024-04-24 17:22:37.145641] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.972 [2024-04-24 17:22:37.145645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:18:27.972 [2024-04-24 17:22:37.145649] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfae0 length 0x10 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.145656] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.145661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.972 [2024-04-24 17:22:37.145679] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.972 [2024-04-24 17:22:37.145683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:18:27.972 [2024-04-24 17:22:37.145687] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb08 length 0x10 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.145694] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.145699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.972 [2024-04-24 17:22:37.145723] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.972 [2024-04-24 17:22:37.145727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:18:27.972 [2024-04-24 17:22:37.145731] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb30 length 0x10 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.145738] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.145743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.972 [2024-04-24 17:22:37.145761] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.972 [2024-04-24 17:22:37.145765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:18:27.972 [2024-04-24 17:22:37.145769] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.145776] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 
0x40 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.145782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.972 [2024-04-24 17:22:37.145797] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.972 [2024-04-24 17:22:37.145802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:18:27.972 [2024-04-24 17:22:37.145806] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.145812] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.145818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.972 [2024-04-24 17:22:37.145840] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.972 [2024-04-24 17:22:37.145844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:18:27.972 [2024-04-24 17:22:37.145848] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.145855] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.145861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.972 [2024-04-24 17:22:37.145874] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.972 [2024-04-24 17:22:37.145878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:18:27.972 [2024-04-24 17:22:37.145882] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.145889] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.145895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.972 [2024-04-24 17:22:37.145909] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.972 [2024-04-24 17:22:37.145913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:18:27.972 [2024-04-24 17:22:37.145918] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.145924] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.145930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.972 [2024-04-24 17:22:37.145949] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.972 [2024-04-24 17:22:37.145953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:18:27.972 [2024-04-24 
17:22:37.145957] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.145964] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.145970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.972 [2024-04-24 17:22:37.145991] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.972 [2024-04-24 17:22:37.145995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:18:27.972 [2024-04-24 17:22:37.146000] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.146006] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.146012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.972 [2024-04-24 17:22:37.146034] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.972 [2024-04-24 17:22:37.146038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:18:27.972 [2024-04-24 17:22:37.146042] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.146049] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.146055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.972 [2024-04-24 17:22:37.146070] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.972 [2024-04-24 17:22:37.146075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:18:27.972 [2024-04-24 17:22:37.146079] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.146085] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.146091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.972 [2024-04-24 17:22:37.146109] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.972 [2024-04-24 17:22:37.146113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:18:27.972 [2024-04-24 17:22:37.146117] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.146124] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.146129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.972 [2024-04-24 17:22:37.146144] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.972 [2024-04-24 17:22:37.146148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:18:27.972 [2024-04-24 17:22:37.146152] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.146159] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.972 [2024-04-24 17:22:37.146164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.972 [2024-04-24 17:22:37.146183] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.972 [2024-04-24 17:22:37.146187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:18:27.973 [2024-04-24 17:22:37.146192] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf838 length 0x10 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.146198] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.146206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.973 [2024-04-24 17:22:37.146221] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.973 [2024-04-24 17:22:37.146225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:18:27.973 [2024-04-24 17:22:37.146229] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf860 length 0x10 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.146236] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.146242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.973 [2024-04-24 17:22:37.146258] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.973 [2024-04-24 17:22:37.146262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:18:27.973 [2024-04-24 17:22:37.146266] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf888 length 0x10 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.146273] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.146279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.973 [2024-04-24 17:22:37.146299] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.973 [2024-04-24 17:22:37.146303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:18:27.973 [2024-04-24 17:22:37.146307] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8b0 length 0x10 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.146314] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 
0x40 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.146320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.973 [2024-04-24 17:22:37.146334] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.973 [2024-04-24 17:22:37.146338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:18:27.973 [2024-04-24 17:22:37.146342] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d8 length 0x10 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.146349] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.146355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.973 [2024-04-24 17:22:37.146369] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.973 [2024-04-24 17:22:37.146373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:18:27.973 [2024-04-24 17:22:37.146378] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf900 length 0x10 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.146384] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.146390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.973 [2024-04-24 17:22:37.146409] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.973 [2024-04-24 17:22:37.146413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:18:27.973 [2024-04-24 17:22:37.146417] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf928 length 0x10 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.146424] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.146431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.973 [2024-04-24 17:22:37.146448] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.973 [2024-04-24 17:22:37.146452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:18:27.973 [2024-04-24 17:22:37.146457] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf950 length 0x10 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.146463] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.146469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.973 [2024-04-24 17:22:37.146485] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.973 [2024-04-24 17:22:37.146489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:18:27.973 [2024-04-24 
17:22:37.146493] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf978 length 0x10 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.146500] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.146506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.973 [2024-04-24 17:22:37.146527] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.973 [2024-04-24 17:22:37.146531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:18:27.973 [2024-04-24 17:22:37.146536] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9a0 length 0x10 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.146542] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.146548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.973 [2024-04-24 17:22:37.146565] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.973 [2024-04-24 17:22:37.146569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:18:27.973 [2024-04-24 17:22:37.146574] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c8 length 0x10 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.146580] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.146586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.973 [2024-04-24 17:22:37.146607] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.973 [2024-04-24 17:22:37.146611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:18:27.973 [2024-04-24 17:22:37.146615] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9f0 length 0x10 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.146622] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.146627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.973 [2024-04-24 17:22:37.146648] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.973 [2024-04-24 17:22:37.146652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:18:27.973 [2024-04-24 17:22:37.146656] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa18 length 0x10 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.146664] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.146670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.973 [2024-04-24 17:22:37.146688] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.973 [2024-04-24 17:22:37.146692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:18:27.973 [2024-04-24 17:22:37.146696] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa40 length 0x10 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.146703] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.146708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.973 [2024-04-24 17:22:37.146729] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.973 [2024-04-24 17:22:37.146733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:18:27.973 [2024-04-24 17:22:37.146737] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa68 length 0x10 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.146744] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.146749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.973 [2024-04-24 17:22:37.146767] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.973 [2024-04-24 17:22:37.146771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:18:27.973 [2024-04-24 17:22:37.146775] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa90 length 0x10 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.146782] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.146787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.973 [2024-04-24 17:22:37.146807] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.973 [2024-04-24 17:22:37.146811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:18:27.973 [2024-04-24 17:22:37.146816] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab8 length 0x10 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.146823] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:27.973 [2024-04-24 17:22:37.150833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:27.973 [2024-04-24 17:22:37.150849] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:27.973 [2024-04-24 17:22:37.150853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:001b p:0 m:0 dnr:0 00:18:27.973 [2024-04-24 17:22:37.150857] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfae0 length 0x10 lkey 0x182f00 00:18:27.974 [2024-04-24 17:22:37.150862] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds
00:18:27.974 128
00:18:27.974 Transport Service Identifier: 4420
00:18:27.974 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:18:27.974 Transport Address: 192.168.100.8
00:18:27.974 Transport Specific Address Subtype - RDMA
00:18:27.974 RDMA QP Service Type: 1 (Reliable Connected)
00:18:27.974 RDMA Provider Type: 1 (No provider specified)
00:18:27.974 RDMA CM Service: 1 (RDMA_CM)
00:18:27.974 17:22:37 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:18:28.235 [2024-04-24 17:22:37.216871] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization...
00:18:28.235 [2024-04-24 17:22:37.216916] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3037646 ]
00:18:28.236 EAL: No free 2048 kB hugepages reported on node 1
00:18:28.236 [2024-04-24 17:22:37.256966] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:18:28.236 [2024-04-24 17:22:37.257032] nvme_rdma.c:2261:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr
00:18:28.236 [2024-04-24 17:22:37.257045] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2
00:18:28.236 [2024-04-24 17:22:37.257048] nvme_rdma.c:1295:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420
00:18:28.236 [2024-04-24 17:22:37.257069] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:18:28.236 [2024-04-24 17:22:37.267346] nvme_rdma.c: 510:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32.
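Note on the invocation above: the -r argument to spdk_nvme_identify is an SPDK transport ID string; trtype/adrfam/traddr/trsvcid/subnqn together tell the host how to reach nqn.2016-06.io.spdk:cnode1 over RDMA at 192.168.100.8:4420, and the debug entries that follow are the admin queue pair being connected with exactly that information. As a rough illustration only (not part of the test), the sketch below performs the same connect by hand through the public SPDK host API, using spdk_nvme_transport_id_parse plus spdk_nvme_connect; the program name identify_sketch is made up here, and exact spdk_env_opts / spdk_nvme_ctrlr_opts handling can differ between SPDK releases.

    #include <stdio.h>
    #include <string.h>

    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
            struct spdk_env_opts env_opts;
            struct spdk_nvme_transport_id trid;
            struct spdk_nvme_ctrlr *ctrlr;

            spdk_env_opts_init(&env_opts);
            env_opts.name = "identify_sketch";      /* hypothetical app name */
            if (spdk_env_init(&env_opts) != 0) {
                    fprintf(stderr, "spdk_env_init failed\n");
                    return 1;
            }

            /* Same transport ID fields the test passes via -r above. */
            memset(&trid, 0, sizeof(trid));
            if (spdk_nvme_transport_id_parse(&trid,
                "trtype:rdma adrfam:IPv4 traddr:192.168.100.8 "
                "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
                    fprintf(stderr, "failed to parse transport ID\n");
                    return 1;
            }

            /* spdk_nvme_connect() drives the sequence the debug log records:
             * FABRIC CONNECT, PROPERTY GET/SET for CAP/CC/CSTS, then IDENTIFY. */
            ctrlr = spdk_nvme_connect(&trid, NULL, 0);
            if (ctrlr == NULL) {
                    fprintf(stderr, "spdk_nvme_connect failed\n");
                    return 1;
            }

            printf("connected to %s\n",
                   (const char *)spdk_nvme_ctrlr_get_data(ctrlr)->subnqn);
            spdk_nvme_detach(ctrlr);
            return 0;
    }

Build flags are omitted because they depend on how SPDK was configured on the build node; against a live target this walks the same connect and controller-enable sequence that the log entries below record.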
00:18:28.236 [2024-04-24 17:22:37.277599] nvme_rdma.c:1180:nvme_rdma_connect_established: *DEBUG*: rc =0 00:18:28.236 [2024-04-24 17:22:37.277608] nvme_rdma.c:1185:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:18:28.236 [2024-04-24 17:22:37.277613] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.277618] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.277622] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.277627] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.277631] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.277635] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.277640] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.277644] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.277648] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.277652] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.277657] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.277661] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf838 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.277665] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf860 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.277669] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf888 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.277673] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8b0 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.277678] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8d8 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.277682] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf900 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.277688] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf928 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.277693] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf950 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.277697] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf978 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.277701] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9a0 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.277706] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9c8 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.277710] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9f0 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 
17:22:37.277714] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa18 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.277718] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa40 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.277722] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa68 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.277726] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa90 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.277731] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfab8 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.277735] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfae0 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.277739] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb08 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.277743] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb30 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.277747] nvme_rdma.c:1199:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:18:28.236 [2024-04-24 17:22:37.277751] nvme_rdma.c:1202:nvme_rdma_connect_established: *DEBUG*: rc =0 00:18:28.236 [2024-04-24 17:22:37.277754] nvme_rdma.c:1207:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:18:28.236 [2024-04-24 17:22:37.277766] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.277776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf200 len:0x400 key:0x182f00 00:18:28.236 [2024-04-24 17:22:37.282829] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.236 [2024-04-24 17:22:37.282837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:18:28.236 [2024-04-24 17:22:37.282842] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.282849] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:18:28.236 [2024-04-24 17:22:37.282854] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:18:28.236 [2024-04-24 17:22:37.282859] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:18:28.236 [2024-04-24 17:22:37.282869] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.282875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.236 [2024-04-24 17:22:37.282894] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.236 [2024-04-24 17:22:37.282899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:18:28.236 [2024-04-24 17:22:37.282906] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:18:28.236 [2024-04-24 17:22:37.282910] nvme_rdma.c:2436:nvme_rdma_request_ready: 
*DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.282917] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:18:28.236 [2024-04-24 17:22:37.282923] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.282929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.236 [2024-04-24 17:22:37.282947] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.236 [2024-04-24 17:22:37.282951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:18:28.236 [2024-04-24 17:22:37.282956] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:18:28.236 [2024-04-24 17:22:37.282959] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.282965] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:18:28.236 [2024-04-24 17:22:37.282970] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.282976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.236 [2024-04-24 17:22:37.282994] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.236 [2024-04-24 17:22:37.282998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:28.236 [2024-04-24 17:22:37.283003] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:28.236 [2024-04-24 17:22:37.283007] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.283013] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.283019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.236 [2024-04-24 17:22:37.283035] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.236 [2024-04-24 17:22:37.283040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:28.236 [2024-04-24 17:22:37.283044] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:18:28.236 [2024-04-24 17:22:37.283048] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:18:28.236 [2024-04-24 17:22:37.283052] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.283057] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by 
writing CC.EN = 1 (timeout 15000 ms) 00:18:28.236 [2024-04-24 17:22:37.283161] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:18:28.236 [2024-04-24 17:22:37.283165] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:28.236 [2024-04-24 17:22:37.283172] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.283178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.236 [2024-04-24 17:22:37.283194] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.236 [2024-04-24 17:22:37.283198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:28.236 [2024-04-24 17:22:37.283204] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:28.236 [2024-04-24 17:22:37.283208] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x182f00 00:18:28.236 [2024-04-24 17:22:37.283214] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182f00 00:18:28.237 [2024-04-24 17:22:37.283220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.237 [2024-04-24 17:22:37.283237] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.237 [2024-04-24 17:22:37.283241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:18:28.237 [2024-04-24 17:22:37.283245] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:28.237 [2024-04-24 17:22:37.283249] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:18:28.237 [2024-04-24 17:22:37.283253] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x182f00 00:18:28.237 [2024-04-24 17:22:37.283258] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:18:28.237 [2024-04-24 17:22:37.283264] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:18:28.237 [2024-04-24 17:22:37.283271] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182f00 00:18:28.237 [2024-04-24 17:22:37.283277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182f00 00:18:28.237 [2024-04-24 17:22:37.283316] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.237 [2024-04-24 17:22:37.283320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:28.237 [2024-04-24 17:22:37.283327] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:18:28.237 [2024-04-24 17:22:37.283331] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:18:28.237 [2024-04-24 17:22:37.283334] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:18:28.237 [2024-04-24 17:22:37.283339] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:18:28.237 [2024-04-24 17:22:37.283343] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:18:28.237 [2024-04-24 17:22:37.283347] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:18:28.237 [2024-04-24 17:22:37.283351] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x182f00 00:18:28.237 [2024-04-24 17:22:37.283356] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:18:28.237 [2024-04-24 17:22:37.283362] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182f00 00:18:28.237 [2024-04-24 17:22:37.283369] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.237 [2024-04-24 17:22:37.283387] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.237 [2024-04-24 17:22:37.283391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:28.237 [2024-04-24 17:22:37.283397] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0500 length 0x40 lkey 0x182f00 00:18:28.237 [2024-04-24 17:22:37.283403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.237 [2024-04-24 17:22:37.283409] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0640 length 0x40 lkey 0x182f00 00:18:28.237 [2024-04-24 17:22:37.283414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.237 [2024-04-24 17:22:37.283419] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.237 [2024-04-24 17:22:37.283424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.237 [2024-04-24 17:22:37.283429] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d08c0 length 0x40 lkey 0x182f00 00:18:28.237 [2024-04-24 17:22:37.283434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.237 [2024-04-24 17:22:37.283438] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:18:28.237 [2024-04-24 17:22:37.283442] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x182f00 00:18:28.237 [2024-04-24 17:22:37.283450] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:28.237 [2024-04-24 17:22:37.283455] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182f00 00:18:28.237 [2024-04-24 17:22:37.283461] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.237 [2024-04-24 17:22:37.283479] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.237 [2024-04-24 17:22:37.283483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:18:28.237 [2024-04-24 17:22:37.283488] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:18:28.237 [2024-04-24 17:22:37.283492] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:28.237 [2024-04-24 17:22:37.283495] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x182f00 00:18:28.237 [2024-04-24 17:22:37.283501] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:18:28.237 [2024-04-24 17:22:37.283506] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:18:28.237 [2024-04-24 17:22:37.283511] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182f00 00:18:28.237 [2024-04-24 17:22:37.283517] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.237 [2024-04-24 17:22:37.283538] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.237 [2024-04-24 17:22:37.283542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:18:28.237 [2024-04-24 17:22:37.283580] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:18:28.237 [2024-04-24 17:22:37.283585] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x182f00 00:18:28.237 [2024-04-24 17:22:37.283590] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:18:28.237 [2024-04-24 17:22:37.283597] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182f00 00:18:28.237 [2024-04-24 17:22:37.283604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x182f00 00:18:28.237 [2024-04-24 17:22:37.283628] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.237 [2024-04-24 17:22:37.283632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:28.237 [2024-04-24 17:22:37.283639] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:18:28.237 
[2024-04-24 17:22:37.283649] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:18:28.237 [2024-04-24 17:22:37.283653] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf838 length 0x10 lkey 0x182f00 00:18:28.237 [2024-04-24 17:22:37.283659] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:18:28.237 [2024-04-24 17:22:37.283665] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182f00 00:18:28.237 [2024-04-24 17:22:37.283671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182f00 00:18:28.237 [2024-04-24 17:22:37.283704] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.237 [2024-04-24 17:22:37.283708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:28.237 [2024-04-24 17:22:37.283718] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:28.237 [2024-04-24 17:22:37.283722] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf860 length 0x10 lkey 0x182f00 00:18:28.237 [2024-04-24 17:22:37.283728] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:28.237 [2024-04-24 17:22:37.283734] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182f00 00:18:28.237 [2024-04-24 17:22:37.283740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182f00 00:18:28.237 [2024-04-24 17:22:37.283766] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.237 [2024-04-24 17:22:37.283770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:28.237 [2024-04-24 17:22:37.283776] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:28.237 [2024-04-24 17:22:37.283780] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf888 length 0x10 lkey 0x182f00 00:18:28.237 [2024-04-24 17:22:37.283785] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:18:28.237 [2024-04-24 17:22:37.283791] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:18:28.237 [2024-04-24 17:22:37.283796] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:28.237 [2024-04-24 17:22:37.283801] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:18:28.237 [2024-04-24 17:22:37.283805] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - 
Host ID 00:18:28.237 [2024-04-24 17:22:37.283808] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:18:28.237 [2024-04-24 17:22:37.283814] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:18:28.237 [2024-04-24 17:22:37.283833] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182f00 00:18:28.237 [2024-04-24 17:22:37.283840] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.237 [2024-04-24 17:22:37.283845] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a00 length 0x40 lkey 0x182f00 00:18:28.237 [2024-04-24 17:22:37.283850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.237 [2024-04-24 17:22:37.283858] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.238 [2024-04-24 17:22:37.283862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:28.238 [2024-04-24 17:22:37.283867] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8b0 length 0x10 lkey 0x182f00 00:18:28.238 [2024-04-24 17:22:37.283873] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182f00 00:18:28.238 [2024-04-24 17:22:37.283879] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:0 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.238 [2024-04-24 17:22:37.283884] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.238 [2024-04-24 17:22:37.283889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:28.238 [2024-04-24 17:22:37.283893] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d8 length 0x10 lkey 0x182f00 00:18:28.238 [2024-04-24 17:22:37.283899] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.238 [2024-04-24 17:22:37.283903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:28.238 [2024-04-24 17:22:37.283908] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf900 length 0x10 lkey 0x182f00 00:18:28.238 [2024-04-24 17:22:37.283914] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182f00 00:18:28.238 [2024-04-24 17:22:37.283920] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:0 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.238 [2024-04-24 17:22:37.283938] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.238 [2024-04-24 17:22:37.283942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:28.238 [2024-04-24 17:22:37.283946] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf928 length 0x10 lkey 0x182f00 00:18:28.238 [2024-04-24 17:22:37.283952] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 
lkey 0x182f00 00:18:28.238 [2024-04-24 17:22:37.283958] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.238 [2024-04-24 17:22:37.283976] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.238 [2024-04-24 17:22:37.283981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:18:28.238 [2024-04-24 17:22:37.283985] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf950 length 0x10 lkey 0x182f00 00:18:28.238 [2024-04-24 17:22:37.283993] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182f00 00:18:28.238 [2024-04-24 17:22:37.283999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x182f00 00:18:28.238 [2024-04-24 17:22:37.284007] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a00 length 0x40 lkey 0x182f00 00:18:28.238 [2024-04-24 17:22:37.284013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x182f00 00:18:28.238 [2024-04-24 17:22:37.284019] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b40 length 0x40 lkey 0x182f00 00:18:28.238 [2024-04-24 17:22:37.284025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x182f00 00:18:28.238 [2024-04-24 17:22:37.284031] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c80 length 0x40 lkey 0x182f00 00:18:28.238 [2024-04-24 17:22:37.284037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x182f00 00:18:28.238 [2024-04-24 17:22:37.284043] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.238 [2024-04-24 17:22:37.284048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:28.238 [2024-04-24 17:22:37.284058] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf978 length 0x10 lkey 0x182f00 00:18:28.238 [2024-04-24 17:22:37.284067] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.238 [2024-04-24 17:22:37.284071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:28.238 [2024-04-24 17:22:37.284078] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9a0 length 0x10 lkey 0x182f00 00:18:28.238 [2024-04-24 17:22:37.284082] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.238 [2024-04-24 17:22:37.284086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:28.238 [2024-04-24 17:22:37.284091] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c8 length 0x10 lkey 0x182f00 00:18:28.238 [2024-04-24 17:22:37.284095] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.238 [2024-04-24 17:22:37.284099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:28.238 [2024-04-24 17:22:37.284106] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9f0 length 0x10 lkey 0x182f00 00:18:28.238 ===================================================== 00:18:28.238 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:18:28.238 ===================================================== 00:18:28.238 Controller Capabilities/Features 00:18:28.238 ================================ 00:18:28.238 Vendor ID: 8086 00:18:28.238 Subsystem Vendor ID: 8086 00:18:28.238 Serial Number: SPDK00000000000001 00:18:28.238 Model Number: SPDK bdev Controller 00:18:28.238 Firmware Version: 24.05 00:18:28.238 Recommended Arb Burst: 6 00:18:28.238 IEEE OUI Identifier: e4 d2 5c 00:18:28.238 Multi-path I/O 00:18:28.238 May have multiple subsystem ports: Yes 00:18:28.238 May have multiple controllers: Yes 00:18:28.238 Associated with SR-IOV VF: No 00:18:28.238 Max Data Transfer Size: 131072 00:18:28.238 Max Number of Namespaces: 32 00:18:28.238 Max Number of I/O Queues: 127 00:18:28.238 NVMe Specification Version (VS): 1.3 00:18:28.238 NVMe Specification Version (Identify): 1.3 00:18:28.238 Maximum Queue Entries: 128 00:18:28.238 Contiguous Queues Required: Yes 00:18:28.238 Arbitration Mechanisms Supported 00:18:28.238 Weighted Round Robin: Not Supported 00:18:28.238 Vendor Specific: Not Supported 00:18:28.238 Reset Timeout: 15000 ms 00:18:28.238 Doorbell Stride: 4 bytes 00:18:28.238 NVM Subsystem Reset: Not Supported 00:18:28.238 Command Sets Supported 00:18:28.238 NVM Command Set: Supported 00:18:28.238 Boot Partition: Not Supported 00:18:28.238 Memory Page Size Minimum: 4096 bytes 00:18:28.238 Memory Page Size Maximum: 4096 bytes 00:18:28.238 Persistent Memory Region: Not Supported 00:18:28.238 Optional Asynchronous Events Supported 00:18:28.238 Namespace Attribute Notices: Supported 00:18:28.238 Firmware Activation Notices: Not Supported 00:18:28.238 ANA Change Notices: Not Supported 00:18:28.238 PLE Aggregate Log Change Notices: Not Supported 00:18:28.238 LBA Status Info Alert Notices: Not Supported 00:18:28.238 EGE Aggregate Log Change Notices: Not Supported 00:18:28.238 Normal NVM Subsystem Shutdown event: Not Supported 00:18:28.238 Zone Descriptor Change Notices: Not Supported 00:18:28.238 Discovery Log Change Notices: Not Supported 00:18:28.238 Controller Attributes 00:18:28.238 128-bit Host Identifier: Supported 00:18:28.238 Non-Operational Permissive Mode: Not Supported 00:18:28.238 NVM Sets: Not Supported 00:18:28.238 Read Recovery Levels: Not Supported 00:18:28.238 Endurance Groups: Not Supported 00:18:28.238 Predictable Latency Mode: Not Supported 00:18:28.238 Traffic Based Keep ALive: Not Supported 00:18:28.238 Namespace Granularity: Not Supported 00:18:28.238 SQ Associations: Not Supported 00:18:28.238 UUID List: Not Supported 00:18:28.238 Multi-Domain Subsystem: Not Supported 00:18:28.238 Fixed Capacity Management: Not Supported 00:18:28.238 Variable Capacity Management: Not Supported 00:18:28.238 Delete Endurance Group: Not Supported 00:18:28.238 Delete NVM Set: Not Supported 00:18:28.238 Extended LBA Formats Supported: Not Supported 00:18:28.238 Flexible Data Placement Supported: Not Supported 00:18:28.238 00:18:28.238 Controller Memory Buffer Support 00:18:28.238 
================================ 00:18:28.238 Supported: No 00:18:28.238 00:18:28.238 Persistent Memory Region Support 00:18:28.238 ================================ 00:18:28.238 Supported: No 00:18:28.238 00:18:28.238 Admin Command Set Attributes 00:18:28.238 ============================ 00:18:28.238 Security Send/Receive: Not Supported 00:18:28.238 Format NVM: Not Supported 00:18:28.238 Firmware Activate/Download: Not Supported 00:18:28.238 Namespace Management: Not Supported 00:18:28.238 Device Self-Test: Not Supported 00:18:28.238 Directives: Not Supported 00:18:28.238 NVMe-MI: Not Supported 00:18:28.238 Virtualization Management: Not Supported 00:18:28.238 Doorbell Buffer Config: Not Supported 00:18:28.238 Get LBA Status Capability: Not Supported 00:18:28.238 Command & Feature Lockdown Capability: Not Supported 00:18:28.238 Abort Command Limit: 4 00:18:28.238 Async Event Request Limit: 4 00:18:28.238 Number of Firmware Slots: N/A 00:18:28.238 Firmware Slot 1 Read-Only: N/A 00:18:28.238 Firmware Activation Without Reset: N/A 00:18:28.238 Multiple Update Detection Support: N/A 00:18:28.238 Firmware Update Granularity: No Information Provided 00:18:28.238 Per-Namespace SMART Log: No 00:18:28.238 Asymmetric Namespace Access Log Page: Not Supported 00:18:28.238 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:18:28.238 Command Effects Log Page: Supported 00:18:28.238 Get Log Page Extended Data: Supported 00:18:28.238 Telemetry Log Pages: Not Supported 00:18:28.239 Persistent Event Log Pages: Not Supported 00:18:28.239 Supported Log Pages Log Page: May Support 00:18:28.239 Commands Supported & Effects Log Page: Not Supported 00:18:28.239 Feature Identifiers & Effects Log Page:May Support 00:18:28.239 NVMe-MI Commands & Effects Log Page: May Support 00:18:28.239 Data Area 4 for Telemetry Log: Not Supported 00:18:28.239 Error Log Page Entries Supported: 128 00:18:28.239 Keep Alive: Supported 00:18:28.239 Keep Alive Granularity: 10000 ms 00:18:28.239 00:18:28.239 NVM Command Set Attributes 00:18:28.239 ========================== 00:18:28.239 Submission Queue Entry Size 00:18:28.239 Max: 64 00:18:28.239 Min: 64 00:18:28.239 Completion Queue Entry Size 00:18:28.239 Max: 16 00:18:28.239 Min: 16 00:18:28.239 Number of Namespaces: 32 00:18:28.239 Compare Command: Supported 00:18:28.239 Write Uncorrectable Command: Not Supported 00:18:28.239 Dataset Management Command: Supported 00:18:28.239 Write Zeroes Command: Supported 00:18:28.239 Set Features Save Field: Not Supported 00:18:28.239 Reservations: Supported 00:18:28.239 Timestamp: Not Supported 00:18:28.239 Copy: Supported 00:18:28.239 Volatile Write Cache: Present 00:18:28.239 Atomic Write Unit (Normal): 1 00:18:28.239 Atomic Write Unit (PFail): 1 00:18:28.239 Atomic Compare & Write Unit: 1 00:18:28.239 Fused Compare & Write: Supported 00:18:28.239 Scatter-Gather List 00:18:28.239 SGL Command Set: Supported 00:18:28.239 SGL Keyed: Supported 00:18:28.239 SGL Bit Bucket Descriptor: Not Supported 00:18:28.239 SGL Metadata Pointer: Not Supported 00:18:28.239 Oversized SGL: Not Supported 00:18:28.239 SGL Metadata Address: Not Supported 00:18:28.239 SGL Offset: Supported 00:18:28.239 Transport SGL Data Block: Not Supported 00:18:28.239 Replay Protected Memory Block: Not Supported 00:18:28.239 00:18:28.239 Firmware Slot Information 00:18:28.239 ========================= 00:18:28.239 Active slot: 1 00:18:28.239 Slot 1 Firmware Revision: 24.05 00:18:28.239 00:18:28.239 00:18:28.239 Commands Supported and Effects 00:18:28.239 ============================== 
00:18:28.239 Admin Commands 00:18:28.239 -------------- 00:18:28.239 Get Log Page (02h): Supported 00:18:28.239 Identify (06h): Supported 00:18:28.239 Abort (08h): Supported 00:18:28.239 Set Features (09h): Supported 00:18:28.239 Get Features (0Ah): Supported 00:18:28.239 Asynchronous Event Request (0Ch): Supported 00:18:28.239 Keep Alive (18h): Supported 00:18:28.239 I/O Commands 00:18:28.239 ------------ 00:18:28.239 Flush (00h): Supported LBA-Change 00:18:28.239 Write (01h): Supported LBA-Change 00:18:28.239 Read (02h): Supported 00:18:28.239 Compare (05h): Supported 00:18:28.239 Write Zeroes (08h): Supported LBA-Change 00:18:28.239 Dataset Management (09h): Supported LBA-Change 00:18:28.239 Copy (19h): Supported LBA-Change 00:18:28.239 Unknown (79h): Supported LBA-Change 00:18:28.239 Unknown (7Ah): Supported 00:18:28.239 00:18:28.239 Error Log 00:18:28.239 ========= 00:18:28.239 00:18:28.239 Arbitration 00:18:28.239 =========== 00:18:28.239 Arbitration Burst: 1 00:18:28.239 00:18:28.239 Power Management 00:18:28.239 ================ 00:18:28.239 Number of Power States: 1 00:18:28.239 Current Power State: Power State #0 00:18:28.239 Power State #0: 00:18:28.239 Max Power: 0.00 W 00:18:28.239 Non-Operational State: Operational 00:18:28.239 Entry Latency: Not Reported 00:18:28.239 Exit Latency: Not Reported 00:18:28.239 Relative Read Throughput: 0 00:18:28.239 Relative Read Latency: 0 00:18:28.239 Relative Write Throughput: 0 00:18:28.239 Relative Write Latency: 0 00:18:28.239 Idle Power: Not Reported 00:18:28.239 Active Power: Not Reported 00:18:28.239 Non-Operational Permissive Mode: Not Supported 00:18:28.239 00:18:28.239 Health Information 00:18:28.239 ================== 00:18:28.239 Critical Warnings: 00:18:28.239 Available Spare Space: OK 00:18:28.239 Temperature: OK 00:18:28.239 Device Reliability: OK 00:18:28.239 Read Only: No 00:18:28.239 Volatile Memory Backup: OK 00:18:28.239 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:28.239 Temperature Threshold: [2024-04-24 17:22:37.284183] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c80 length 0x40 lkey 0x182f00 00:18:28.239 [2024-04-24 17:22:37.284189] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.239 [2024-04-24 17:22:37.284211] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.239 [2024-04-24 17:22:37.284216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:28.239 [2024-04-24 17:22:37.284220] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa18 length 0x10 lkey 0x182f00 00:18:28.239 [2024-04-24 17:22:37.284241] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:18:28.239 [2024-04-24 17:22:37.284247] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 10055 doesn't match qid 00:18:28.239 [2024-04-24 17:22:37.284260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:5 sqhd:8790 p:0 m:0 dnr:0 00:18:28.239 [2024-04-24 17:22:37.284264] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 10055 doesn't match qid 00:18:28.239 [2024-04-24 17:22:37.284270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:5 sqhd:8790 p:0 m:0 dnr:0 00:18:28.239 [2024-04-24 
17:22:37.284276] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 10055 doesn't match qid 00:18:28.239 [2024-04-24 17:22:37.284282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:5 sqhd:8790 p:0 m:0 dnr:0 00:18:28.239 [2024-04-24 17:22:37.284286] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 10055 doesn't match qid 00:18:28.239 [2024-04-24 17:22:37.284292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32729 cdw0:5 sqhd:8790 p:0 m:0 dnr:0 00:18:28.239 [2024-04-24 17:22:37.284299] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d08c0 length 0x40 lkey 0x182f00 00:18:28.239 [2024-04-24 17:22:37.284305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.239 [2024-04-24 17:22:37.284324] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.239 [2024-04-24 17:22:37.284329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:18:28.239 [2024-04-24 17:22:37.284335] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.239 [2024-04-24 17:22:37.284341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.239 [2024-04-24 17:22:37.284345] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa40 length 0x10 lkey 0x182f00 00:18:28.239 [2024-04-24 17:22:37.284365] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.239 [2024-04-24 17:22:37.284369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:28.239 [2024-04-24 17:22:37.284373] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:18:28.239 [2024-04-24 17:22:37.284377] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:18:28.239 [2024-04-24 17:22:37.284381] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa68 length 0x10 lkey 0x182f00 00:18:28.239 [2024-04-24 17:22:37.284388] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.239 [2024-04-24 17:22:37.284393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.239 [2024-04-24 17:22:37.284409] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.239 [2024-04-24 17:22:37.284413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:18:28.239 [2024-04-24 17:22:37.284418] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa90 length 0x10 lkey 0x182f00 00:18:28.239 [2024-04-24 17:22:37.284425] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.239 [2024-04-24 17:22:37.284431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.239 [2024-04-24 17:22:37.284449] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.239 [2024-04-24 17:22:37.284453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:18:28.239 [2024-04-24 17:22:37.284458] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab8 length 0x10 lkey 0x182f00 00:18:28.239 [2024-04-24 17:22:37.284466] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.239 [2024-04-24 17:22:37.284472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.239 [2024-04-24 17:22:37.284489] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.239 [2024-04-24 17:22:37.284493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:18:28.239 [2024-04-24 17:22:37.284500] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfae0 length 0x10 lkey 0x182f00 00:18:28.239 [2024-04-24 17:22:37.284507] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.239 [2024-04-24 17:22:37.284512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.239 [2024-04-24 17:22:37.284529] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.239 [2024-04-24 17:22:37.284534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:18:28.239 [2024-04-24 17:22:37.284539] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb08 length 0x10 lkey 0x182f00 00:18:28.240 [2024-04-24 17:22:37.284546] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.240 [2024-04-24 17:22:37.284553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.240 [2024-04-24 17:22:37.284573] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.240 [2024-04-24 17:22:37.284578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:18:28.240 [2024-04-24 17:22:37.284584] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb30 length 0x10 lkey 0x182f00 00:18:28.240 [2024-04-24 17:22:37.284593] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.240 [2024-04-24 17:22:37.284599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.240 [2024-04-24 17:22:37.284617] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.240 [2024-04-24 17:22:37.284621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:18:28.240 [2024-04-24 17:22:37.284628] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x182f00 00:18:28.240 [2024-04-24 17:22:37.284635] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 
0x40 lkey 0x182f00 00:18:28.240 [2024-04-24 17:22:37.284641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.240 [2024-04-24 17:22:37.284658] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.240 [2024-04-24 17:22:37.284663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:18:28.240 [2024-04-24 17:22:37.284667] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x182f00 00:18:28.240 [2024-04-24 17:22:37.284674] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.240 [2024-04-24 17:22:37.284679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.240 [2024-04-24 17:22:37.284698] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.240 [2024-04-24 17:22:37.284702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:18:28.240 [2024-04-24 17:22:37.284706] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x182f00 00:18:28.240 [2024-04-24 17:22:37.284713] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.240 [2024-04-24 17:22:37.284719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.240 [2024-04-24 17:22:37.284740] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.240 [2024-04-24 17:22:37.284744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:18:28.240 [2024-04-24 17:22:37.284750] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x182f00 00:18:28.240 [2024-04-24 17:22:37.284757] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.240 [2024-04-24 17:22:37.284763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.240 [2024-04-24 17:22:37.284783] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.240 [2024-04-24 17:22:37.284787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:18:28.240 [2024-04-24 17:22:37.284791] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x182f00 00:18:28.240 [2024-04-24 17:22:37.284798] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.240 [2024-04-24 17:22:37.284803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.240 [2024-04-24 17:22:37.284820] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.240 [2024-04-24 17:22:37.284824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:18:28.240 [2024-04-24 
17:22:37.284833] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x182f00 00:18:28.240 [2024-04-24 17:22:37.284840] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.240 [2024-04-24 17:22:37.284846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.240 [2024-04-24 17:22:37.284862] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.240 [2024-04-24 17:22:37.284866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:18:28.240 [2024-04-24 17:22:37.284870] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x182f00 00:18:28.240 [2024-04-24 17:22:37.284877] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.240 [2024-04-24 17:22:37.284883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.240 [2024-04-24 17:22:37.284906] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.240 [2024-04-24 17:22:37.284910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:18:28.240 [2024-04-24 17:22:37.284915] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x182f00 00:18:28.240 [2024-04-24 17:22:37.284922] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.240 [2024-04-24 17:22:37.284927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.240 [2024-04-24 17:22:37.284946] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.240 [2024-04-24 17:22:37.284951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:18:28.240 [2024-04-24 17:22:37.284955] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x182f00 00:18:28.240 [2024-04-24 17:22:37.284961] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.240 [2024-04-24 17:22:37.284967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.240 [2024-04-24 17:22:37.284983] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.240 [2024-04-24 17:22:37.284990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:18:28.240 [2024-04-24 17:22:37.284995] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x182f00 00:18:28.240 [2024-04-24 17:22:37.285002] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.240 [2024-04-24 17:22:37.285007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.240 [2024-04-24 17:22:37.285026] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.240 [2024-04-24 17:22:37.285031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:18:28.240 [2024-04-24 17:22:37.285035] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x182f00 00:18:28.240 [2024-04-24 17:22:37.285042] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.240 [2024-04-24 17:22:37.285048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.240 [2024-04-24 17:22:37.285068] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.240 [2024-04-24 17:22:37.285072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:18:28.240 [2024-04-24 17:22:37.285077] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf838 length 0x10 lkey 0x182f00 00:18:28.240 [2024-04-24 17:22:37.285083] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.240 [2024-04-24 17:22:37.285089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.240 [2024-04-24 17:22:37.285108] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.240 [2024-04-24 17:22:37.285112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:18:28.240 [2024-04-24 17:22:37.285116] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf860 length 0x10 lkey 0x182f00 00:18:28.240 [2024-04-24 17:22:37.285123] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.240 [2024-04-24 17:22:37.285129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.240 [2024-04-24 17:22:37.285149] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.240 [2024-04-24 17:22:37.285153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:18:28.241 [2024-04-24 17:22:37.285158] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf888 length 0x10 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285164] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.241 [2024-04-24 17:22:37.285194] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.241 [2024-04-24 17:22:37.285198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:18:28.241 [2024-04-24 17:22:37.285202] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8b0 length 0x10 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285209] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 
0x40 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.241 [2024-04-24 17:22:37.285234] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.241 [2024-04-24 17:22:37.285238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:18:28.241 [2024-04-24 17:22:37.285242] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d8 length 0x10 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285249] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.241 [2024-04-24 17:22:37.285271] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.241 [2024-04-24 17:22:37.285275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:18:28.241 [2024-04-24 17:22:37.285279] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf900 length 0x10 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285286] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.241 [2024-04-24 17:22:37.285308] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.241 [2024-04-24 17:22:37.285312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:18:28.241 [2024-04-24 17:22:37.285316] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf928 length 0x10 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285323] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.241 [2024-04-24 17:22:37.285348] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.241 [2024-04-24 17:22:37.285352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:18:28.241 [2024-04-24 17:22:37.285356] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf950 length 0x10 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285363] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.241 [2024-04-24 17:22:37.285391] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.241 [2024-04-24 17:22:37.285395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:18:28.241 [2024-04-24 
17:22:37.285399] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf978 length 0x10 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285406] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.241 [2024-04-24 17:22:37.285426] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.241 [2024-04-24 17:22:37.285430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:18:28.241 [2024-04-24 17:22:37.285435] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9a0 length 0x10 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285441] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285447] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.241 [2024-04-24 17:22:37.285463] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.241 [2024-04-24 17:22:37.285467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:18:28.241 [2024-04-24 17:22:37.285472] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c8 length 0x10 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285478] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.241 [2024-04-24 17:22:37.285499] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.241 [2024-04-24 17:22:37.285503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:18:28.241 [2024-04-24 17:22:37.285508] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9f0 length 0x10 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285514] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.241 [2024-04-24 17:22:37.285537] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.241 [2024-04-24 17:22:37.285541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:18:28.241 [2024-04-24 17:22:37.285545] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa18 length 0x10 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285552] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.241 [2024-04-24 17:22:37.285581] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.241 [2024-04-24 17:22:37.285585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:18:28.241 [2024-04-24 17:22:37.285589] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa40 length 0x10 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285596] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.241 [2024-04-24 17:22:37.285617] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.241 [2024-04-24 17:22:37.285621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:18:28.241 [2024-04-24 17:22:37.285625] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa68 length 0x10 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285632] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.241 [2024-04-24 17:22:37.285652] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.241 [2024-04-24 17:22:37.285656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:18:28.241 [2024-04-24 17:22:37.285660] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa90 length 0x10 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285667] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.241 [2024-04-24 17:22:37.285693] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.241 [2024-04-24 17:22:37.285697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:18:28.241 [2024-04-24 17:22:37.285701] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab8 length 0x10 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285708] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.241 [2024-04-24 17:22:37.285736] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.241 [2024-04-24 17:22:37.285740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:18:28.241 [2024-04-24 17:22:37.285744] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfae0 length 0x10 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285751] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 
0x40 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.241 [2024-04-24 17:22:37.285774] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.241 [2024-04-24 17:22:37.285778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:18:28.241 [2024-04-24 17:22:37.285783] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb08 length 0x10 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285789] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.241 [2024-04-24 17:22:37.285811] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.241 [2024-04-24 17:22:37.285815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:18:28.241 [2024-04-24 17:22:37.285820] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb30 length 0x10 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285831] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.241 [2024-04-24 17:22:37.285837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.241 [2024-04-24 17:22:37.285859] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.242 [2024-04-24 17:22:37.285864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:18:28.242 [2024-04-24 17:22:37.285868] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.285875] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.285880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.242 [2024-04-24 17:22:37.285895] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.242 [2024-04-24 17:22:37.285899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:18:28.242 [2024-04-24 17:22:37.285904] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.285910] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.285917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.242 [2024-04-24 17:22:37.285938] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.242 [2024-04-24 17:22:37.285942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:18:28.242 [2024-04-24 
17:22:37.285946] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.285953] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.285959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.242 [2024-04-24 17:22:37.285981] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.242 [2024-04-24 17:22:37.285985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:18:28.242 [2024-04-24 17:22:37.285989] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.285996] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.286001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.242 [2024-04-24 17:22:37.286019] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.242 [2024-04-24 17:22:37.286023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:18:28.242 [2024-04-24 17:22:37.286027] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.286034] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.286040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.242 [2024-04-24 17:22:37.286059] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.242 [2024-04-24 17:22:37.286063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:18:28.242 [2024-04-24 17:22:37.286067] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.286074] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.286080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.242 [2024-04-24 17:22:37.286097] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.242 [2024-04-24 17:22:37.286101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:18:28.242 [2024-04-24 17:22:37.286106] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.286112] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.286118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.242 [2024-04-24 17:22:37.286136] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.242 [2024-04-24 17:22:37.286140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:18:28.242 [2024-04-24 17:22:37.286144] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.286151] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.286158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.242 [2024-04-24 17:22:37.286176] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.242 [2024-04-24 17:22:37.286180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:18:28.242 [2024-04-24 17:22:37.286184] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.286191] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.286196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.242 [2024-04-24 17:22:37.286214] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.242 [2024-04-24 17:22:37.286218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:18:28.242 [2024-04-24 17:22:37.286223] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.286229] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.286235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.242 [2024-04-24 17:22:37.286250] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.242 [2024-04-24 17:22:37.286254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:18:28.242 [2024-04-24 17:22:37.286258] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.286265] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.286271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.242 [2024-04-24 17:22:37.286293] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.242 [2024-04-24 17:22:37.286297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:18:28.242 [2024-04-24 17:22:37.286301] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf838 length 0x10 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.286308] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 
0x40 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.286314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.242 [2024-04-24 17:22:37.286336] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.242 [2024-04-24 17:22:37.286340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:18:28.242 [2024-04-24 17:22:37.286344] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf860 length 0x10 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.286351] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.286357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.242 [2024-04-24 17:22:37.286372] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.242 [2024-04-24 17:22:37.286376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:18:28.242 [2024-04-24 17:22:37.286380] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf888 length 0x10 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.286388] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.286394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.242 [2024-04-24 17:22:37.286412] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.242 [2024-04-24 17:22:37.286416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:18:28.242 [2024-04-24 17:22:37.286420] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8b0 length 0x10 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.286427] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.286433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.242 [2024-04-24 17:22:37.286450] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.242 [2024-04-24 17:22:37.286454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:18:28.242 [2024-04-24 17:22:37.286459] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d8 length 0x10 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.286466] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.286471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.242 [2024-04-24 17:22:37.286486] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.242 [2024-04-24 17:22:37.286490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:18:28.242 [2024-04-24 
17:22:37.286495] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf900 length 0x10 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.286501] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.286507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.242 [2024-04-24 17:22:37.286522] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.242 [2024-04-24 17:22:37.286526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:18:28.242 [2024-04-24 17:22:37.286530] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf928 length 0x10 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.286537] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.242 [2024-04-24 17:22:37.286543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.242 [2024-04-24 17:22:37.286566] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.243 [2024-04-24 17:22:37.286570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:18:28.243 [2024-04-24 17:22:37.286575] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf950 length 0x10 lkey 0x182f00 00:18:28.243 [2024-04-24 17:22:37.286581] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.243 [2024-04-24 17:22:37.286587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.243 [2024-04-24 17:22:37.286603] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.243 [2024-04-24 17:22:37.286607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:18:28.243 [2024-04-24 17:22:37.286612] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf978 length 0x10 lkey 0x182f00 00:18:28.243 [2024-04-24 17:22:37.286620] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.243 [2024-04-24 17:22:37.286626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.243 [2024-04-24 17:22:37.286649] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.243 [2024-04-24 17:22:37.286653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:18:28.243 [2024-04-24 17:22:37.286657] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9a0 length 0x10 lkey 0x182f00 00:18:28.243 [2024-04-24 17:22:37.286664] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.243 [2024-04-24 17:22:37.286670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.243 [2024-04-24 17:22:37.286689] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.243 [2024-04-24 17:22:37.286693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:18:28.243 [2024-04-24 17:22:37.286697] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c8 length 0x10 lkey 0x182f00 00:18:28.243 [2024-04-24 17:22:37.286704] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.243 [2024-04-24 17:22:37.286710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.243 [2024-04-24 17:22:37.286725] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.243 [2024-04-24 17:22:37.286729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:18:28.243 [2024-04-24 17:22:37.286733] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9f0 length 0x10 lkey 0x182f00 00:18:28.243 [2024-04-24 17:22:37.286740] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.243 [2024-04-24 17:22:37.286746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.243 [2024-04-24 17:22:37.286765] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.243 [2024-04-24 17:22:37.286769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:18:28.243 [2024-04-24 17:22:37.286773] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa18 length 0x10 lkey 0x182f00 00:18:28.243 [2024-04-24 17:22:37.286780] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.243 [2024-04-24 17:22:37.286786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.243 [2024-04-24 17:22:37.286800] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.243 [2024-04-24 17:22:37.286805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:18:28.243 [2024-04-24 17:22:37.286809] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa40 length 0x10 lkey 0x182f00 00:18:28.243 [2024-04-24 17:22:37.286816] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182f00 00:18:28.243 [2024-04-24 17:22:37.286821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.243 [2024-04-24 17:22:37.290832] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.243 [2024-04-24 17:22:37.290838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:18:28.243 [2024-04-24 17:22:37.290844] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa68 length 0x10 lkey 0x182f00 00:18:28.243 [2024-04-24 17:22:37.290851] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 
0x40 lkey 0x182f00 00:18:28.243 [2024-04-24 17:22:37.290857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:28.243 [2024-04-24 17:22:37.290879] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:28.243 [2024-04-24 17:22:37.290883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0019 p:0 m:0 dnr:0 00:18:28.243 [2024-04-24 17:22:37.290887] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa90 length 0x10 lkey 0x182f00 00:18:28.243 [2024-04-24 17:22:37.290892] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:18:28.243 0 Kelvin (-273 Celsius) 00:18:28.243 Available Spare: 0% 00:18:28.243 Available Spare Threshold: 0% 00:18:28.243 Life Percentage Used: 0% 00:18:28.243 Data Units Read: 0 00:18:28.243 Data Units Written: 0 00:18:28.243 Host Read Commands: 0 00:18:28.243 Host Write Commands: 0 00:18:28.243 Controller Busy Time: 0 minutes 00:18:28.243 Power Cycles: 0 00:18:28.243 Power On Hours: 0 hours 00:18:28.243 Unsafe Shutdowns: 0 00:18:28.243 Unrecoverable Media Errors: 0 00:18:28.243 Lifetime Error Log Entries: 0 00:18:28.243 Warning Temperature Time: 0 minutes 00:18:28.243 Critical Temperature Time: 0 minutes 00:18:28.243 00:18:28.243 Number of Queues 00:18:28.243 ================ 00:18:28.243 Number of I/O Submission Queues: 127 00:18:28.243 Number of I/O Completion Queues: 127 00:18:28.243 00:18:28.243 Active Namespaces 00:18:28.243 ================= 00:18:28.243 Namespace ID:1 00:18:28.243 Error Recovery Timeout: Unlimited 00:18:28.243 Command Set Identifier: NVM (00h) 00:18:28.243 Deallocate: Supported 00:18:28.243 Deallocated/Unwritten Error: Not Supported 00:18:28.243 Deallocated Read Value: Unknown 00:18:28.243 Deallocate in Write Zeroes: Not Supported 00:18:28.243 Deallocated Guard Field: 0xFFFF 00:18:28.243 Flush: Supported 00:18:28.243 Reservation: Supported 00:18:28.243 Namespace Sharing Capabilities: Multiple Controllers 00:18:28.243 Size (in LBAs): 131072 (0GiB) 00:18:28.243 Capacity (in LBAs): 131072 (0GiB) 00:18:28.243 Utilization (in LBAs): 131072 (0GiB) 00:18:28.243 NGUID: ABCDEF0123456789ABCDEF0123456789 00:18:28.243 EUI64: ABCDEF0123456789 00:18:28.243 UUID: 3bee7370-a392-471f-9b90-f1e174a5d1d0 00:18:28.243 Thin Provisioning: Not Supported 00:18:28.243 Per-NS Atomic Units: Yes 00:18:28.243 Atomic Boundary Size (Normal): 0 00:18:28.243 Atomic Boundary Size (PFail): 0 00:18:28.243 Atomic Boundary Offset: 0 00:18:28.243 Maximum Single Source Range Length: 65535 00:18:28.243 Maximum Copy Length: 65535 00:18:28.243 Maximum Source Range Count: 1 00:18:28.243 NGUID/EUI64 Never Reused: No 00:18:28.243 Namespace Write Protected: No 00:18:28.243 Number of LBA Formats: 1 00:18:28.243 Current LBA Format: LBA Format #00 00:18:28.243 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:28.243 00:18:28.243 17:22:37 -- host/identify.sh@51 -- # sync 00:18:28.243 17:22:37 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:28.243 17:22:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:28.243 17:22:37 -- common/autotest_common.sh@10 -- # set +x 00:18:28.243 17:22:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:28.243 17:22:37 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:18:28.243 17:22:37 -- host/identify.sh@56 -- # nvmftestfini 
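The long run of nvme_rdma.c / nvme_qpair.c debug lines above is the host driver draining queued FABRIC PROPERTY GET completions while the controller shuts down (nvme_ctrlr_shutdown_poll_async reports shutdown complete after 6 milliseconds), after which identify.sh prints the controller and namespace data and tears the target configuration back down over the RPC socket. A minimal sketch of that teardown, assuming the workspace paths from this log and the default /var/tmp/spdk.sock RPC socket:

# hypothetical standalone replay of the identify.sh cleanup step
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
sudo $SPDK/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
# unload the kernel initiator modules the test loaded, mirroring the nvmfcleanup steps that follow
sudo modprobe -v -r nvme-rdma
sudo modprobe -v -r nvme-fabrics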
00:18:28.243 17:22:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:28.243 17:22:37 -- nvmf/common.sh@117 -- # sync 00:18:28.243 17:22:37 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:28.243 17:22:37 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:28.243 17:22:37 -- nvmf/common.sh@120 -- # set +e 00:18:28.243 17:22:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:28.243 17:22:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:28.243 rmmod nvme_rdma 00:18:28.243 rmmod nvme_fabrics 00:18:28.243 17:22:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:28.243 17:22:37 -- nvmf/common.sh@124 -- # set -e 00:18:28.243 17:22:37 -- nvmf/common.sh@125 -- # return 0 00:18:28.243 17:22:37 -- nvmf/common.sh@478 -- # '[' -n 3037607 ']' 00:18:28.243 17:22:37 -- nvmf/common.sh@479 -- # killprocess 3037607 00:18:28.243 17:22:37 -- common/autotest_common.sh@936 -- # '[' -z 3037607 ']' 00:18:28.243 17:22:37 -- common/autotest_common.sh@940 -- # kill -0 3037607 00:18:28.243 17:22:37 -- common/autotest_common.sh@941 -- # uname 00:18:28.243 17:22:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:28.243 17:22:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3037607 00:18:28.243 17:22:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:28.243 17:22:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:28.243 17:22:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3037607' 00:18:28.243 killing process with pid 3037607 00:18:28.243 17:22:37 -- common/autotest_common.sh@955 -- # kill 3037607 00:18:28.243 [2024-04-24 17:22:37.431592] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:18:28.243 17:22:37 -- common/autotest_common.sh@960 -- # wait 3037607 00:18:28.501 17:22:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:28.501 17:22:37 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:18:28.501 00:18:28.501 real 0m6.453s 00:18:28.501 user 0m7.507s 00:18:28.501 sys 0m3.805s 00:18:28.501 17:22:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:28.501 17:22:37 -- common/autotest_common.sh@10 -- # set +x 00:18:28.501 ************************************ 00:18:28.501 END TEST nvmf_identify 00:18:28.501 ************************************ 00:18:28.759 17:22:37 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:18:28.759 17:22:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:28.759 17:22:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:28.759 17:22:37 -- common/autotest_common.sh@10 -- # set +x 00:18:28.759 ************************************ 00:18:28.759 START TEST nvmf_perf 00:18:28.759 ************************************ 00:18:28.759 17:22:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:18:28.759 * Looking for test storage... 
00:18:28.759 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:28.759 17:22:37 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:28.759 17:22:37 -- nvmf/common.sh@7 -- # uname -s 00:18:28.759 17:22:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:28.759 17:22:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:28.759 17:22:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:28.759 17:22:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:28.759 17:22:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:28.759 17:22:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:28.759 17:22:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:28.759 17:22:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:28.759 17:22:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:28.759 17:22:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:28.759 17:22:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:28.759 17:22:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:18:28.759 17:22:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:28.759 17:22:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:28.759 17:22:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:28.759 17:22:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:28.759 17:22:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:28.759 17:22:37 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:28.759 17:22:37 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:28.759 17:22:37 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:28.759 17:22:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.759 17:22:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.759 17:22:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.759 17:22:37 -- paths/export.sh@5 -- # export PATH 00:18:28.759 17:22:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.759 17:22:37 -- nvmf/common.sh@47 -- # : 0 00:18:28.759 17:22:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:28.759 17:22:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:28.759 17:22:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:28.759 17:22:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:28.759 17:22:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:28.760 17:22:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:28.760 17:22:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:28.760 17:22:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:28.760 17:22:37 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:28.760 17:22:37 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:28.760 17:22:37 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:28.760 17:22:37 -- host/perf.sh@17 -- # nvmftestinit 00:18:28.760 17:22:37 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:18:28.760 17:22:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:28.760 17:22:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:28.760 17:22:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:28.760 17:22:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:28.760 17:22:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.760 17:22:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:28.760 17:22:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:28.760 17:22:37 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:28.760 17:22:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:28.760 17:22:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:28.760 17:22:37 -- common/autotest_common.sh@10 -- # set +x 00:18:34.023 17:22:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:34.023 17:22:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:34.023 17:22:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:34.023 17:22:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:34.023 17:22:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:34.023 17:22:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:34.023 17:22:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:34.023 17:22:42 -- nvmf/common.sh@295 -- # net_devs=() 
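nvmftestinit is about to enumerate the physical RDMA NICs: gather_supported_nvmf_pci_devs builds lists of Intel (0x8086) and Mellanox (0x15b3) PCI IDs and, because SPDK_TEST_NVMF_NICS=mlx5 in this run, keeps only the mlx entries. A quick way to confirm a node carries the ConnectX parts this job expects (device ID 0x1015, as found just below) is a PCI scan; the grep pattern here is only illustrative:

# hypothetical pre-check for the Mellanox ports the phy test needs
lspci -Dnn | grep -i '15b3:1015'   # should list the 0000:da:00.0 / 0000:da:00.1 functions on this node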
00:18:34.023 17:22:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:34.023 17:22:42 -- nvmf/common.sh@296 -- # e810=() 00:18:34.023 17:22:42 -- nvmf/common.sh@296 -- # local -ga e810 00:18:34.023 17:22:42 -- nvmf/common.sh@297 -- # x722=() 00:18:34.023 17:22:42 -- nvmf/common.sh@297 -- # local -ga x722 00:18:34.023 17:22:42 -- nvmf/common.sh@298 -- # mlx=() 00:18:34.023 17:22:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:34.023 17:22:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:34.023 17:22:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:34.023 17:22:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:34.023 17:22:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:34.023 17:22:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:34.023 17:22:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:34.023 17:22:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:34.023 17:22:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:34.023 17:22:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:34.023 17:22:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:34.023 17:22:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:34.023 17:22:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:34.023 17:22:42 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:34.023 17:22:42 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:34.023 17:22:42 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:34.023 17:22:42 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:34.023 17:22:42 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:34.023 17:22:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:34.023 17:22:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:34.023 17:22:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:18:34.023 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:18:34.023 17:22:42 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:34.023 17:22:42 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:34.023 17:22:42 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:34.023 17:22:42 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:34.023 17:22:42 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:34.023 17:22:42 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:34.023 17:22:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:34.023 17:22:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:18:34.023 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:18:34.023 17:22:42 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:34.023 17:22:42 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:34.023 17:22:42 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:34.023 17:22:42 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:34.023 17:22:42 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:34.023 17:22:42 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:34.023 17:22:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:34.023 17:22:42 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:34.023 17:22:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:34.023 17:22:42 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:34.023 17:22:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:34.023 17:22:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:34.023 17:22:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:18:34.023 Found net devices under 0000:da:00.0: mlx_0_0 00:18:34.023 17:22:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:34.023 17:22:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:34.023 17:22:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:34.023 17:22:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:34.023 17:22:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:34.023 17:22:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:18:34.023 Found net devices under 0000:da:00.1: mlx_0_1 00:18:34.023 17:22:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:34.023 17:22:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:34.023 17:22:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:34.023 17:22:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:34.023 17:22:42 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:18:34.023 17:22:42 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:18:34.023 17:22:42 -- nvmf/common.sh@409 -- # rdma_device_init 00:18:34.023 17:22:42 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:18:34.023 17:22:42 -- nvmf/common.sh@58 -- # uname 00:18:34.023 17:22:42 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:34.023 17:22:42 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:34.023 17:22:42 -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:34.023 17:22:42 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:34.023 17:22:42 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:34.023 17:22:42 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:34.023 17:22:42 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:34.023 17:22:43 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:34.023 17:22:43 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:18:34.023 17:22:43 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:34.023 17:22:43 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:34.023 17:22:43 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:34.023 17:22:43 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:34.023 17:22:43 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:34.023 17:22:43 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:34.024 17:22:43 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:34.024 17:22:43 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:34.024 17:22:43 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.024 17:22:43 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:34.024 17:22:43 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:34.024 17:22:43 -- nvmf/common.sh@105 -- # continue 2 00:18:34.024 17:22:43 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:34.024 17:22:43 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.024 17:22:43 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:34.024 17:22:43 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.024 17:22:43 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:34.024 17:22:43 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:34.024 17:22:43 -- 
nvmf/common.sh@105 -- # continue 2 00:18:34.024 17:22:43 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:34.024 17:22:43 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:34.024 17:22:43 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:34.024 17:22:43 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:34.024 17:22:43 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:34.024 17:22:43 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:34.024 17:22:43 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:34.024 17:22:43 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:34.024 17:22:43 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:34.024 434: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:34.024 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:18:34.024 altname enp218s0f0np0 00:18:34.024 altname ens818f0np0 00:18:34.024 inet 192.168.100.8/24 scope global mlx_0_0 00:18:34.024 valid_lft forever preferred_lft forever 00:18:34.024 17:22:43 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:34.024 17:22:43 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:34.024 17:22:43 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:34.024 17:22:43 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:34.024 17:22:43 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:34.024 17:22:43 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:34.024 17:22:43 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:34.024 17:22:43 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:34.024 17:22:43 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:34.024 435: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:34.024 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:18:34.024 altname enp218s0f1np1 00:18:34.024 altname ens818f1np1 00:18:34.024 inet 192.168.100.9/24 scope global mlx_0_1 00:18:34.024 valid_lft forever preferred_lft forever 00:18:34.024 17:22:43 -- nvmf/common.sh@411 -- # return 0 00:18:34.024 17:22:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:34.024 17:22:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:34.024 17:22:43 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:18:34.024 17:22:43 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:18:34.024 17:22:43 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:34.024 17:22:43 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:34.024 17:22:43 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:34.024 17:22:43 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:34.024 17:22:43 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:34.024 17:22:43 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:34.024 17:22:43 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:34.024 17:22:43 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.024 17:22:43 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:34.024 17:22:43 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:34.024 17:22:43 -- nvmf/common.sh@105 -- # continue 2 00:18:34.024 17:22:43 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:34.024 17:22:43 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.024 17:22:43 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:34.024 17:22:43 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.024 17:22:43 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
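With the ConnectX ports identified, rdma_device_init loads the soft RDMA stack and allocate_nic_ips verifies that each mlx interface already carries its 192.168.100.x address. Reproducing those two steps by hand would look roughly like this (module order and interface names taken from this log):

# hypothetical manual equivalent of rdma_device_init + allocate_nic_ips
for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    sudo modprobe "$m"
done
# the target listens on 192.168.100.8; the second port holds 192.168.100.9
ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1
ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1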
00:18:34.024 17:22:43 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:34.024 17:22:43 -- nvmf/common.sh@105 -- # continue 2 00:18:34.024 17:22:43 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:34.024 17:22:43 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:34.024 17:22:43 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:34.024 17:22:43 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:34.024 17:22:43 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:34.024 17:22:43 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:34.024 17:22:43 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:34.024 17:22:43 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:34.024 17:22:43 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:34.024 17:22:43 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:34.024 17:22:43 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:34.024 17:22:43 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:34.024 17:22:43 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:18:34.024 192.168.100.9' 00:18:34.024 17:22:43 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:34.024 192.168.100.9' 00:18:34.024 17:22:43 -- nvmf/common.sh@446 -- # head -n 1 00:18:34.024 17:22:43 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:34.024 17:22:43 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:18:34.024 192.168.100.9' 00:18:34.024 17:22:43 -- nvmf/common.sh@447 -- # tail -n +2 00:18:34.024 17:22:43 -- nvmf/common.sh@447 -- # head -n 1 00:18:34.024 17:22:43 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:34.024 17:22:43 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:18:34.024 17:22:43 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:34.024 17:22:43 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:18:34.024 17:22:43 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:18:34.024 17:22:43 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:18:34.024 17:22:43 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:18:34.024 17:22:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:34.024 17:22:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:34.024 17:22:43 -- common/autotest_common.sh@10 -- # set +x 00:18:34.024 17:22:43 -- nvmf/common.sh@470 -- # nvmfpid=3039868 00:18:34.024 17:22:43 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:34.024 17:22:43 -- nvmf/common.sh@471 -- # waitforlisten 3039868 00:18:34.024 17:22:43 -- common/autotest_common.sh@817 -- # '[' -z 3039868 ']' 00:18:34.024 17:22:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.024 17:22:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:34.024 17:22:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.024 17:22:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:34.024 17:22:43 -- common/autotest_common.sh@10 -- # set +x 00:18:34.024 [2024-04-24 17:22:43.198546] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
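nvmfappstart then launches the target application on four cores and blocks until its RPC socket answers. A rough standalone equivalent, using the binary path and masks from this log (the poll loop below is a simplification of waitforlisten, not the helper itself):

# hypothetical manual start of the NVMe-oF target used by perf.sh
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
sudo modprobe nvme-rdma
sudo $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# wait for /var/tmp/spdk.sock to come up before issuing configuration RPCs
until sudo $SPDK/scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do sleep 1; done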
00:18:34.024 [2024-04-24 17:22:43.198589] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.024 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.024 [2024-04-24 17:22:43.253757] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:34.282 [2024-04-24 17:22:43.332228] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.282 [2024-04-24 17:22:43.332264] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:34.282 [2024-04-24 17:22:43.332273] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.282 [2024-04-24 17:22:43.332279] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:34.282 [2024-04-24 17:22:43.332284] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:34.282 [2024-04-24 17:22:43.332330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.282 [2024-04-24 17:22:43.332426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.282 [2024-04-24 17:22:43.332513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:34.282 [2024-04-24 17:22:43.332515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.847 17:22:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:34.847 17:22:43 -- common/autotest_common.sh@850 -- # return 0 00:18:34.847 17:22:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:34.847 17:22:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:34.847 17:22:43 -- common/autotest_common.sh@10 -- # set +x 00:18:34.847 17:22:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.847 17:22:44 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:18:34.847 17:22:44 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:18:38.126 17:22:47 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:18:38.126 17:22:47 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:18:38.126 17:22:47 -- host/perf.sh@30 -- # local_nvme_trid=0000:5f:00.0 00:18:38.126 17:22:47 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:38.384 17:22:47 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:18:38.384 17:22:47 -- host/perf.sh@33 -- # '[' -n 0000:5f:00.0 ']' 00:18:38.384 17:22:47 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:18:38.384 17:22:47 -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:18:38.384 17:22:47 -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:18:38.384 [2024-04-24 17:22:47.561917] rdma.c:2778:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:18:38.384 [2024-04-24 17:22:47.582229] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x5b02c0/0x6de000) succeed. 
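On the target side, perf.sh builds its block devices and the RDMA transport entirely over RPC: it loads the local NVMe drive at 0000:5f:00.0 as Nvme0n1, adds a 64 MiB / 512 B-block Malloc0, then creates the rdma transport (the -c 0 request is raised to the 256-byte in-capsule minimum, as the rdma.c warning above notes). Condensed, the full sequence, including the subsystem and listener RPCs that show up just below, is:

# hypothetical condensed replay of the target configuration performed here
RPC="sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py"
$RPC bdev_malloc_create 64 512                                    # -> Malloc0
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420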
00:18:38.384 [2024-04-24 17:22:47.592671] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x5b18b0/0x5bdf00) succeed. 00:18:38.641 17:22:47 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:38.900 17:22:47 -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:38.900 17:22:47 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:38.900 17:22:48 -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:38.900 17:22:48 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:18:39.158 17:22:48 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:39.158 [2024-04-24 17:22:48.404804] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:39.416 17:22:48 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:39.416 17:22:48 -- host/perf.sh@52 -- # '[' -n 0000:5f:00.0 ']' 00:18:39.416 17:22:48 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5f:00.0' 00:18:39.416 17:22:48 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:18:39.416 17:22:48 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5f:00.0' 00:18:40.790 Initializing NVMe Controllers 00:18:40.790 Attached to NVMe Controller at 0000:5f:00.0 [8086:0a54] 00:18:40.790 Associating PCIE (0000:5f:00.0) NSID 1 with lcore 0 00:18:40.790 Initialization complete. Launching workers. 00:18:40.790 ======================================================== 00:18:40.790 Latency(us) 00:18:40.790 Device Information : IOPS MiB/s Average min max 00:18:40.790 PCIE (0000:5f:00.0) NSID 1 from core 0: 99724.62 389.55 320.50 38.41 5209.53 00:18:40.790 ======================================================== 00:18:40.790 Total : 99724.62 389.55 320.50 38.41 5209.53 00:18:40.790 00:18:40.790 17:22:49 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:18:40.790 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.069 Initializing NVMe Controllers 00:18:44.069 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:18:44.069 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:44.069 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:44.069 Initialization complete. Launching workers. 
00:18:44.069 ======================================================== 00:18:44.069 Latency(us) 00:18:44.069 Device Information : IOPS MiB/s Average min max 00:18:44.069 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6858.99 26.79 145.59 46.38 4248.88 00:18:44.069 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5356.99 20.93 186.47 71.03 4248.49 00:18:44.069 ======================================================== 00:18:44.069 Total : 12215.98 47.72 163.52 46.38 4248.88 00:18:44.069 00:18:44.069 17:22:53 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:18:44.069 EAL: No free 2048 kB hugepages reported on node 1 00:18:47.347 Initializing NVMe Controllers 00:18:47.347 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:18:47.347 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:47.347 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:47.347 Initialization complete. Launching workers. 00:18:47.347 ======================================================== 00:18:47.347 Latency(us) 00:18:47.347 Device Information : IOPS MiB/s Average min max 00:18:47.347 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18446.08 72.06 1734.32 461.28 7089.12 00:18:47.347 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4042.23 15.79 7977.08 5420.82 14828.05 00:18:47.347 ======================================================== 00:18:47.347 Total : 22488.31 87.84 2856.45 461.28 14828.05 00:18:47.347 00:18:47.347 17:22:56 -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:18:47.347 17:22:56 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:18:47.604 EAL: No free 2048 kB hugepages reported on node 1 00:18:51.833 Initializing NVMe Controllers 00:18:51.833 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:18:51.833 Controller IO queue size 128, less than required. 00:18:51.833 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:51.833 Controller IO queue size 128, less than required. 00:18:51.833 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:51.833 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:51.833 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:51.833 Initialization complete. Launching workers. 
00:18:51.833 ======================================================== 00:18:51.833 Latency(us) 00:18:51.833 Device Information : IOPS MiB/s Average min max 00:18:51.833 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3499.18 874.80 36629.89 15344.65 87151.87 00:18:51.833 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3644.15 911.04 34807.14 15432.58 60484.91 00:18:51.833 ======================================================== 00:18:51.833 Total : 7143.33 1785.83 35700.02 15344.65 87151.87 00:18:51.833 00:18:51.833 17:23:00 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:18:51.833 EAL: No free 2048 kB hugepages reported on node 1 00:18:52.091 No valid NVMe controllers or AIO or URING devices found 00:18:52.091 Initializing NVMe Controllers 00:18:52.091 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:18:52.091 Controller IO queue size 128, less than required. 00:18:52.091 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:52.091 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:18:52.091 Controller IO queue size 128, less than required. 00:18:52.091 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:52.091 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:18:52.091 WARNING: Some requested NVMe devices were skipped 00:18:52.091 17:23:01 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:18:52.091 EAL: No free 2048 kB hugepages reported on node 1 00:18:57.359 Initializing NVMe Controllers 00:18:57.359 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:18:57.359 Controller IO queue size 128, less than required. 00:18:57.359 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:57.359 Controller IO queue size 128, less than required. 00:18:57.359 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:57.359 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:57.359 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:57.359 Initialization complete. Launching workers. 
00:18:57.359 00:18:57.359 ==================== 00:18:57.359 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:18:57.359 RDMA transport: 00:18:57.359 dev name: mlx5_0 00:18:57.359 polls: 397988 00:18:57.359 idle_polls: 394848 00:18:57.359 completions: 44070 00:18:57.359 queued_requests: 1 00:18:57.359 total_send_wrs: 22035 00:18:57.359 send_doorbell_updates: 2901 00:18:57.359 total_recv_wrs: 22162 00:18:57.359 recv_doorbell_updates: 2902 00:18:57.359 --------------------------------- 00:18:57.359 00:18:57.359 ==================== 00:18:57.359 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:18:57.359 RDMA transport: 00:18:57.359 dev name: mlx5_0 00:18:57.359 polls: 401755 00:18:57.359 idle_polls: 401479 00:18:57.359 completions: 20214 00:18:57.359 queued_requests: 1 00:18:57.359 total_send_wrs: 10107 00:18:57.359 send_doorbell_updates: 255 00:18:57.359 total_recv_wrs: 10234 00:18:57.359 recv_doorbell_updates: 256 00:18:57.359 --------------------------------- 00:18:57.359 ======================================================== 00:18:57.359 Latency(us) 00:18:57.359 Device Information : IOPS MiB/s Average min max 00:18:57.359 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5508.50 1377.12 23270.42 10945.53 66327.06 00:18:57.360 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2526.50 631.62 50461.49 30723.28 73720.70 00:18:57.360 ======================================================== 00:18:57.360 Total : 8035.00 2008.75 31820.29 10945.53 73720.70 00:18:57.360 00:18:57.360 17:23:05 -- host/perf.sh@66 -- # sync 00:18:57.360 17:23:05 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:57.360 17:23:05 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:18:57.360 17:23:05 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:18:57.360 17:23:05 -- host/perf.sh@114 -- # nvmftestfini 00:18:57.360 17:23:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:57.360 17:23:05 -- nvmf/common.sh@117 -- # sync 00:18:57.360 17:23:05 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:57.360 17:23:05 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:57.360 17:23:05 -- nvmf/common.sh@120 -- # set +e 00:18:57.360 17:23:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:57.360 17:23:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:57.360 rmmod nvme_rdma 00:18:57.360 rmmod nvme_fabrics 00:18:57.360 17:23:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:57.360 17:23:05 -- nvmf/common.sh@124 -- # set -e 00:18:57.360 17:23:05 -- nvmf/common.sh@125 -- # return 0 00:18:57.360 17:23:05 -- nvmf/common.sh@478 -- # '[' -n 3039868 ']' 00:18:57.360 17:23:05 -- nvmf/common.sh@479 -- # killprocess 3039868 00:18:57.360 17:23:05 -- common/autotest_common.sh@936 -- # '[' -z 3039868 ']' 00:18:57.360 17:23:05 -- common/autotest_common.sh@940 -- # kill -0 3039868 00:18:57.360 17:23:05 -- common/autotest_common.sh@941 -- # uname 00:18:57.360 17:23:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:57.360 17:23:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3039868 00:18:57.360 17:23:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:57.360 17:23:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:57.360 17:23:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3039868' 00:18:57.360 
killing process with pid 3039868 00:18:57.360 17:23:05 -- common/autotest_common.sh@955 -- # kill 3039868 00:18:57.360 17:23:05 -- common/autotest_common.sh@960 -- # wait 3039868 00:18:59.263 17:23:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:59.263 17:23:08 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:18:59.263 00:18:59.263 real 0m30.234s 00:18:59.263 user 1m41.011s 00:18:59.263 sys 0m4.806s 00:18:59.263 17:23:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:59.263 17:23:08 -- common/autotest_common.sh@10 -- # set +x 00:18:59.263 ************************************ 00:18:59.263 END TEST nvmf_perf 00:18:59.263 ************************************ 00:18:59.263 17:23:08 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:18:59.263 17:23:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:59.263 17:23:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:59.263 17:23:08 -- common/autotest_common.sh@10 -- # set +x 00:18:59.263 ************************************ 00:18:59.263 START TEST nvmf_fio_host 00:18:59.263 ************************************ 00:18:59.263 17:23:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:18:59.263 * Looking for test storage... 00:18:59.263 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:59.263 17:23:08 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:59.263 17:23:08 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.263 17:23:08 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.263 17:23:08 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.264 17:23:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.264 17:23:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.264 17:23:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.264 17:23:08 -- paths/export.sh@5 -- # export PATH 00:18:59.264 17:23:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.264 17:23:08 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:59.264 17:23:08 -- nvmf/common.sh@7 -- # uname -s 00:18:59.264 17:23:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.264 17:23:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:59.264 17:23:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:59.264 17:23:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.264 17:23:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.264 17:23:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.264 17:23:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.264 17:23:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.264 17:23:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.264 17:23:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.264 17:23:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:59.264 17:23:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:18:59.264 17:23:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:59.264 17:23:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:59.264 17:23:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:59.264 17:23:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:59.264 17:23:08 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:59.264 17:23:08 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.264 17:23:08 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.264 17:23:08 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.264 17:23:08 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.264 17:23:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.264 17:23:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.264 17:23:08 -- paths/export.sh@5 -- # export PATH 00:18:59.264 17:23:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.264 17:23:08 -- nvmf/common.sh@47 -- # : 0 00:18:59.264 17:23:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:59.264 17:23:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:59.264 17:23:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:59.264 17:23:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:59.264 17:23:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:59.264 17:23:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:59.264 17:23:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:59.264 17:23:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:59.264 17:23:08 -- host/fio.sh@12 -- # nvmftestinit 00:18:59.264 17:23:08 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:18:59.264 17:23:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:59.264 17:23:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:59.264 17:23:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:59.264 17:23:08 -- 
nvmf/common.sh@401 -- # remove_spdk_ns 00:18:59.264 17:23:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.264 17:23:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:59.264 17:23:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.264 17:23:08 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:59.264 17:23:08 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:59.264 17:23:08 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:59.264 17:23:08 -- common/autotest_common.sh@10 -- # set +x 00:19:04.590 17:23:13 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:04.590 17:23:13 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:04.590 17:23:13 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:04.590 17:23:13 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:04.590 17:23:13 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:04.590 17:23:13 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:04.590 17:23:13 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:04.590 17:23:13 -- nvmf/common.sh@295 -- # net_devs=() 00:19:04.590 17:23:13 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:04.590 17:23:13 -- nvmf/common.sh@296 -- # e810=() 00:19:04.590 17:23:13 -- nvmf/common.sh@296 -- # local -ga e810 00:19:04.590 17:23:13 -- nvmf/common.sh@297 -- # x722=() 00:19:04.590 17:23:13 -- nvmf/common.sh@297 -- # local -ga x722 00:19:04.590 17:23:13 -- nvmf/common.sh@298 -- # mlx=() 00:19:04.590 17:23:13 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:04.590 17:23:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:04.590 17:23:13 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:04.590 17:23:13 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:04.590 17:23:13 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:04.590 17:23:13 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:04.590 17:23:13 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:04.590 17:23:13 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:04.590 17:23:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:04.590 17:23:13 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:04.590 17:23:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:04.590 17:23:13 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:04.590 17:23:13 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:04.590 17:23:13 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:04.590 17:23:13 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:04.590 17:23:13 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:04.590 17:23:13 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:04.590 17:23:13 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:04.590 17:23:13 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:04.590 17:23:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:04.590 17:23:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:19:04.590 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:19:04.590 17:23:13 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:04.590 17:23:13 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:04.590 17:23:13 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:04.590 17:23:13 -- nvmf/common.sh@351 
-- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:04.590 17:23:13 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:04.590 17:23:13 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:04.590 17:23:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:04.590 17:23:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:19:04.590 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:19:04.590 17:23:13 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:04.590 17:23:13 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:04.590 17:23:13 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:04.590 17:23:13 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:04.590 17:23:13 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:04.590 17:23:13 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:04.590 17:23:13 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:04.590 17:23:13 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:04.590 17:23:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:04.590 17:23:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.590 17:23:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:04.590 17:23:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.590 17:23:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:19:04.590 Found net devices under 0000:da:00.0: mlx_0_0 00:19:04.590 17:23:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.590 17:23:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:04.590 17:23:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.590 17:23:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:04.590 17:23:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.590 17:23:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:19:04.590 Found net devices under 0000:da:00.1: mlx_0_1 00:19:04.590 17:23:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.590 17:23:13 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:04.590 17:23:13 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:04.590 17:23:13 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:04.590 17:23:13 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:19:04.590 17:23:13 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:19:04.590 17:23:13 -- nvmf/common.sh@409 -- # rdma_device_init 00:19:04.590 17:23:13 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:19:04.590 17:23:13 -- nvmf/common.sh@58 -- # uname 00:19:04.590 17:23:13 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:04.590 17:23:13 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:04.590 17:23:13 -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:04.590 17:23:13 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:04.590 17:23:13 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:04.590 17:23:13 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:04.590 17:23:13 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:04.590 17:23:13 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:04.590 17:23:13 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:19:04.590 17:23:13 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:04.590 17:23:13 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:04.590 17:23:13 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:04.590 17:23:13 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:04.590 
17:23:13 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:04.590 17:23:13 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:04.590 17:23:13 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:04.590 17:23:13 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:04.590 17:23:13 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:04.590 17:23:13 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:04.590 17:23:13 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:04.590 17:23:13 -- nvmf/common.sh@105 -- # continue 2 00:19:04.590 17:23:13 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:04.590 17:23:13 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:04.590 17:23:13 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:04.590 17:23:13 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:04.590 17:23:13 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:04.590 17:23:13 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:04.590 17:23:13 -- nvmf/common.sh@105 -- # continue 2 00:19:04.590 17:23:13 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:04.590 17:23:13 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:04.590 17:23:13 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:04.590 17:23:13 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:04.590 17:23:13 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:04.590 17:23:13 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:04.590 17:23:13 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:04.590 17:23:13 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:04.590 17:23:13 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:04.590 434: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:04.590 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:19:04.590 altname enp218s0f0np0 00:19:04.590 altname ens818f0np0 00:19:04.590 inet 192.168.100.8/24 scope global mlx_0_0 00:19:04.590 valid_lft forever preferred_lft forever 00:19:04.590 17:23:13 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:04.590 17:23:13 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:04.590 17:23:13 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:04.590 17:23:13 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:04.590 17:23:13 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:04.590 17:23:13 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:04.590 17:23:13 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:04.590 17:23:13 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:04.590 17:23:13 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:04.590 435: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:04.590 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:19:04.590 altname enp218s0f1np1 00:19:04.590 altname ens818f1np1 00:19:04.590 inet 192.168.100.9/24 scope global mlx_0_1 00:19:04.590 valid_lft forever preferred_lft forever 00:19:04.590 17:23:13 -- nvmf/common.sh@411 -- # return 0 00:19:04.590 17:23:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:04.590 17:23:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:04.590 17:23:13 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:19:04.590 17:23:13 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:19:04.590 17:23:13 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:04.591 17:23:13 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev 
rxe_net_devs 00:19:04.591 17:23:13 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:04.591 17:23:13 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:04.591 17:23:13 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:04.591 17:23:13 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:04.591 17:23:13 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:04.591 17:23:13 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:04.591 17:23:13 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:04.591 17:23:13 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:04.591 17:23:13 -- nvmf/common.sh@105 -- # continue 2 00:19:04.591 17:23:13 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:04.591 17:23:13 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:04.591 17:23:13 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:04.591 17:23:13 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:04.591 17:23:13 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:04.591 17:23:13 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:04.591 17:23:13 -- nvmf/common.sh@105 -- # continue 2 00:19:04.591 17:23:13 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:04.591 17:23:13 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:04.591 17:23:13 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:04.591 17:23:13 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:04.591 17:23:13 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:04.591 17:23:13 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:04.591 17:23:13 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:04.591 17:23:13 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:04.591 17:23:13 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:04.591 17:23:13 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:04.591 17:23:13 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:04.591 17:23:13 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:04.591 17:23:13 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:19:04.591 192.168.100.9' 00:19:04.591 17:23:13 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:04.591 192.168.100.9' 00:19:04.591 17:23:13 -- nvmf/common.sh@446 -- # head -n 1 00:19:04.591 17:23:13 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:04.591 17:23:13 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:19:04.591 192.168.100.9' 00:19:04.591 17:23:13 -- nvmf/common.sh@447 -- # tail -n +2 00:19:04.591 17:23:13 -- nvmf/common.sh@447 -- # head -n 1 00:19:04.591 17:23:13 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:04.591 17:23:13 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:19:04.591 17:23:13 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:04.591 17:23:13 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:19:04.591 17:23:13 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:19:04.591 17:23:13 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:19:04.591 17:23:13 -- host/fio.sh@14 -- # [[ y != y ]] 00:19:04.591 17:23:13 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:19:04.591 17:23:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:04.591 17:23:13 -- common/autotest_common.sh@10 -- # set +x 00:19:04.591 17:23:13 -- host/fio.sh@22 -- # nvmfpid=3042463 00:19:04.591 17:23:13 -- host/fio.sh@21 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:04.591 17:23:13 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:04.591 17:23:13 -- host/fio.sh@26 -- # waitforlisten 3042463 00:19:04.591 17:23:13 -- common/autotest_common.sh@817 -- # '[' -z 3042463 ']' 00:19:04.591 17:23:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.591 17:23:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:04.591 17:23:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.591 17:23:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:04.591 17:23:13 -- common/autotest_common.sh@10 -- # set +x 00:19:04.591 [2024-04-24 17:23:13.612228] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:19:04.591 [2024-04-24 17:23:13.612272] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.591 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.591 [2024-04-24 17:23:13.668701] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:04.591 [2024-04-24 17:23:13.748721] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:04.591 [2024-04-24 17:23:13.748758] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:04.591 [2024-04-24 17:23:13.748764] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:04.591 [2024-04-24 17:23:13.748774] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:04.591 [2024-04-24 17:23:13.748779] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:04.591 [2024-04-24 17:23:13.748824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:04.591 [2024-04-24 17:23:13.748923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:04.591 [2024-04-24 17:23:13.749012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:04.591 [2024-04-24 17:23:13.749013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.241 17:23:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:05.241 17:23:14 -- common/autotest_common.sh@850 -- # return 0 00:19:05.241 17:23:14 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:05.241 17:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.241 17:23:14 -- common/autotest_common.sh@10 -- # set +x 00:19:05.241 [2024-04-24 17:23:14.440469] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1dfcf60/0x1e01450) succeed. 00:19:05.499 [2024-04-24 17:23:14.450888] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1dfe550/0x1e42ae0) succeed. 
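For reference, a minimal out-of-band sketch of the target bring-up and fio run that host/fio.sh drives next through rpc.py and the SPDK fio plugin. The rpc.py path, NQN, RDMA address/port, malloc bdev parameters, job file, and block size are copied from the trace below; the RPC socket default (/var/tmp/spdk.sock, as printed by waitforlisten above) and the standalone shell framing are assumptions, so treat the snippet as illustrative rather than as the test script itself.

  # assumes nvmf_tgt is already running and listening on /var/tmp/spdk.sock (rpc.py default)
  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192      # create the RDMA transport (flags as used by the test)
  $RPC bdev_malloc_create 64 512 -b Malloc1                                 # 64 MiB RAM-backed bdev with 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1             # expose Malloc1 as a namespace of cnode1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # drive I/O with fio using the SPDK NVMe plugin (ioengine=spdk), addressing the subsystem over NVMe/RDMA
  LD_PRELOAD=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096

The same pattern repeats later in the trace with mock_sgl_config.fio; only the job file changes, the transport, subsystem, and listener setup stay identical.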
00:19:05.499 17:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.499 17:23:14 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:19:05.499 17:23:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:05.499 17:23:14 -- common/autotest_common.sh@10 -- # set +x 00:19:05.499 17:23:14 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:05.499 17:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.499 17:23:14 -- common/autotest_common.sh@10 -- # set +x 00:19:05.499 Malloc1 00:19:05.499 17:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.499 17:23:14 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:05.499 17:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.499 17:23:14 -- common/autotest_common.sh@10 -- # set +x 00:19:05.499 17:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.499 17:23:14 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:05.499 17:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.499 17:23:14 -- common/autotest_common.sh@10 -- # set +x 00:19:05.499 17:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.499 17:23:14 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:05.499 17:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.499 17:23:14 -- common/autotest_common.sh@10 -- # set +x 00:19:05.499 [2024-04-24 17:23:14.656437] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:05.499 17:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.499 17:23:14 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:19:05.499 17:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.499 17:23:14 -- common/autotest_common.sh@10 -- # set +x 00:19:05.499 17:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.499 17:23:14 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:19:05.500 17:23:14 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:19:05.500 17:23:14 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:19:05.500 17:23:14 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:19:05.500 17:23:14 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:05.500 17:23:14 -- common/autotest_common.sh@1325 -- # local sanitizers 00:19:05.500 17:23:14 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:19:05.500 17:23:14 -- common/autotest_common.sh@1327 -- # shift 00:19:05.500 17:23:14 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:19:05.500 17:23:14 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:19:05.500 17:23:14 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:19:05.500 17:23:14 -- 
common/autotest_common.sh@1331 -- # grep libasan 00:19:05.500 17:23:14 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:19:05.500 17:23:14 -- common/autotest_common.sh@1331 -- # asan_lib= 00:19:05.500 17:23:14 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:19:05.500 17:23:14 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:19:05.500 17:23:14 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:19:05.500 17:23:14 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:19:05.500 17:23:14 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:19:05.500 17:23:14 -- common/autotest_common.sh@1331 -- # asan_lib= 00:19:05.500 17:23:14 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:19:05.500 17:23:14 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:19:05.500 17:23:14 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:19:05.758 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:05.758 fio-3.35 00:19:05.758 Starting 1 thread 00:19:06.016 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.542 00:19:08.542 test: (groupid=0, jobs=1): err= 0: pid=3042624: Wed Apr 24 17:23:17 2024 00:19:08.542 read: IOPS=17.7k, BW=69.2MiB/s (72.5MB/s)(139MiB/2004msec) 00:19:08.542 slat (nsec): min=1388, max=31391, avg=1491.57, stdev=503.48 00:19:08.542 clat (usec): min=2588, max=6473, avg=3601.28, stdev=81.39 00:19:08.542 lat (usec): min=2611, max=6475, avg=3602.77, stdev=81.33 00:19:08.542 clat percentiles (usec): 00:19:08.542 | 1.00th=[ 3556], 5.00th=[ 3589], 10.00th=[ 3589], 20.00th=[ 3589], 00:19:08.542 | 30.00th=[ 3589], 40.00th=[ 3589], 50.00th=[ 3589], 60.00th=[ 3589], 00:19:08.542 | 70.00th=[ 3621], 80.00th=[ 3621], 90.00th=[ 3621], 95.00th=[ 3621], 00:19:08.542 | 99.00th=[ 3654], 99.50th=[ 3687], 99.90th=[ 4686], 99.95th=[ 5604], 00:19:08.542 | 99.99th=[ 6456] 00:19:08.542 bw ( KiB/s): min=70208, max=71280, per=100.00%, avg=70856.00, stdev=457.10, samples=4 00:19:08.542 iops : min=17552, max=17820, avg=17714.00, stdev=114.27, samples=4 00:19:08.542 write: IOPS=17.7k, BW=69.2MiB/s (72.5MB/s)(139MiB/2004msec); 0 zone resets 00:19:08.542 slat (nsec): min=1448, max=23701, avg=1592.64, stdev=535.00 00:19:08.542 clat (usec): min=2622, max=6480, avg=3599.36, stdev=78.12 00:19:08.542 lat (usec): min=2634, max=6481, avg=3600.96, stdev=78.04 00:19:08.542 clat percentiles (usec): 00:19:08.542 | 1.00th=[ 3556], 5.00th=[ 3556], 10.00th=[ 3589], 20.00th=[ 3589], 00:19:08.542 | 30.00th=[ 3589], 40.00th=[ 3589], 50.00th=[ 3589], 60.00th=[ 3589], 00:19:08.542 | 70.00th=[ 3621], 80.00th=[ 3621], 90.00th=[ 3621], 95.00th=[ 3621], 00:19:08.542 | 99.00th=[ 3654], 99.50th=[ 3687], 99.90th=[ 4621], 99.95th=[ 5538], 00:19:08.542 | 99.99th=[ 6390] 00:19:08.542 bw ( KiB/s): min=70104, max=71144, per=100.00%, avg=70848.00, stdev=497.85, samples=4 00:19:08.542 iops : min=17526, max=17786, avg=17712.00, stdev=124.46, samples=4 00:19:08.542 lat (msec) : 4=99.82%, 10=0.18% 00:19:08.542 cpu : usr=99.65%, sys=0.00%, ctx=16, majf=0, minf=3 00:19:08.542 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:08.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:19:08.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:08.542 issued rwts: total=35494,35485,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.542 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:08.542 00:19:08.542 Run status group 0 (all jobs): 00:19:08.542 READ: bw=69.2MiB/s (72.5MB/s), 69.2MiB/s-69.2MiB/s (72.5MB/s-72.5MB/s), io=139MiB (145MB), run=2004-2004msec 00:19:08.542 WRITE: bw=69.2MiB/s (72.5MB/s), 69.2MiB/s-69.2MiB/s (72.5MB/s-72.5MB/s), io=139MiB (145MB), run=2004-2004msec 00:19:08.542 17:23:17 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:19:08.542 17:23:17 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:19:08.542 17:23:17 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:19:08.542 17:23:17 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:08.542 17:23:17 -- common/autotest_common.sh@1325 -- # local sanitizers 00:19:08.542 17:23:17 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:19:08.542 17:23:17 -- common/autotest_common.sh@1327 -- # shift 00:19:08.542 17:23:17 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:19:08.542 17:23:17 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:19:08.542 17:23:17 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:19:08.542 17:23:17 -- common/autotest_common.sh@1331 -- # grep libasan 00:19:08.542 17:23:17 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:19:08.542 17:23:17 -- common/autotest_common.sh@1331 -- # asan_lib= 00:19:08.542 17:23:17 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:19:08.542 17:23:17 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:19:08.542 17:23:17 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:19:08.542 17:23:17 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:19:08.542 17:23:17 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:19:08.542 17:23:17 -- common/autotest_common.sh@1331 -- # asan_lib= 00:19:08.542 17:23:17 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:19:08.542 17:23:17 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:19:08.542 17:23:17 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:19:08.542 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:19:08.542 fio-3.35 00:19:08.542 Starting 1 thread 00:19:08.542 EAL: No free 2048 kB hugepages reported on node 1 00:19:11.073 00:19:11.073 test: (groupid=0, jobs=1): err= 0: pid=3042773: Wed Apr 24 17:23:20 2024 00:19:11.073 read: IOPS=13.0k, BW=203MiB/s (213MB/s)(401MiB/1976msec) 00:19:11.073 slat (nsec): min=2324, max=48420, avg=2653.02, stdev=1428.27 00:19:11.073 clat (usec): min=299, 
max=9001, avg=1769.42, stdev=1076.24 00:19:11.073 lat (usec): min=301, max=9018, avg=1772.07, stdev=1077.00 00:19:11.073 clat percentiles (usec): 00:19:11.073 | 1.00th=[ 586], 5.00th=[ 832], 10.00th=[ 988], 20.00th=[ 1156], 00:19:11.073 | 30.00th=[ 1287], 40.00th=[ 1385], 50.00th=[ 1500], 60.00th=[ 1631], 00:19:11.073 | 70.00th=[ 1811], 80.00th=[ 2057], 90.00th=[ 2507], 95.00th=[ 4146], 00:19:11.073 | 99.00th=[ 6783], 99.50th=[ 7177], 99.90th=[ 8586], 99.95th=[ 8848], 00:19:11.073 | 99.99th=[ 8979] 00:19:11.073 bw ( KiB/s): min=100736, max=102240, per=48.73%, avg=101280.00, stdev=659.44, samples=4 00:19:11.073 iops : min= 6296, max= 6390, avg=6330.00, stdev=41.21, samples=4 00:19:11.073 write: IOPS=7065, BW=110MiB/s (116MB/s)(206MiB/1864msec); 0 zone resets 00:19:11.073 slat (usec): min=27, max=120, avg=28.88, stdev= 6.51 00:19:11.073 clat (usec): min=4385, max=22382, avg=14283.48, stdev=1907.14 00:19:11.073 lat (usec): min=4415, max=22409, avg=14312.35, stdev=1906.59 00:19:11.073 clat percentiles (usec): 00:19:11.073 | 1.00th=[ 7635], 5.00th=[11600], 10.00th=[12256], 20.00th=[12911], 00:19:11.073 | 30.00th=[13435], 40.00th=[13829], 50.00th=[14222], 60.00th=[14615], 00:19:11.073 | 70.00th=[15139], 80.00th=[15795], 90.00th=[16712], 95.00th=[17171], 00:19:11.073 | 99.00th=[19268], 99.50th=[19530], 99.90th=[20579], 99.95th=[20841], 00:19:11.073 | 99.99th=[22152] 00:19:11.073 bw ( KiB/s): min=101152, max=106176, per=92.43%, avg=104488.00, stdev=2299.31, samples=4 00:19:11.073 iops : min= 6322, max= 6636, avg=6530.50, stdev=143.71, samples=4 00:19:11.073 lat (usec) : 500=0.27%, 750=1.94%, 1000=4.91% 00:19:11.073 lat (msec) : 2=44.45%, 4=11.15%, 10=3.94%, 20=33.27%, 50=0.07% 00:19:11.073 cpu : usr=97.41%, sys=0.95%, ctx=184, majf=0, minf=2 00:19:11.073 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:19:11.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.073 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:11.073 issued rwts: total=25669,13170,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.073 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:11.073 00:19:11.073 Run status group 0 (all jobs): 00:19:11.073 READ: bw=203MiB/s (213MB/s), 203MiB/s-203MiB/s (213MB/s-213MB/s), io=401MiB (421MB), run=1976-1976msec 00:19:11.073 WRITE: bw=110MiB/s (116MB/s), 110MiB/s-110MiB/s (116MB/s-116MB/s), io=206MiB (216MB), run=1864-1864msec 00:19:11.073 17:23:20 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:11.073 17:23:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.073 17:23:20 -- common/autotest_common.sh@10 -- # set +x 00:19:11.073 17:23:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.073 17:23:20 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:19:11.073 17:23:20 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:19:11.073 17:23:20 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:19:11.073 17:23:20 -- host/fio.sh@84 -- # nvmftestfini 00:19:11.073 17:23:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:11.073 17:23:20 -- nvmf/common.sh@117 -- # sync 00:19:11.073 17:23:20 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:11.073 17:23:20 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:11.073 17:23:20 -- nvmf/common.sh@120 -- # set +e 00:19:11.073 17:23:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:11.073 17:23:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:11.073 rmmod nvme_rdma 00:19:11.073 rmmod 
nvme_fabrics 00:19:11.073 17:23:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:11.073 17:23:20 -- nvmf/common.sh@124 -- # set -e 00:19:11.073 17:23:20 -- nvmf/common.sh@125 -- # return 0 00:19:11.073 17:23:20 -- nvmf/common.sh@478 -- # '[' -n 3042463 ']' 00:19:11.073 17:23:20 -- nvmf/common.sh@479 -- # killprocess 3042463 00:19:11.073 17:23:20 -- common/autotest_common.sh@936 -- # '[' -z 3042463 ']' 00:19:11.073 17:23:20 -- common/autotest_common.sh@940 -- # kill -0 3042463 00:19:11.073 17:23:20 -- common/autotest_common.sh@941 -- # uname 00:19:11.073 17:23:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:11.073 17:23:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3042463 00:19:11.073 17:23:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:11.073 17:23:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:11.073 17:23:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3042463' 00:19:11.073 killing process with pid 3042463 00:19:11.073 17:23:20 -- common/autotest_common.sh@955 -- # kill 3042463 00:19:11.073 17:23:20 -- common/autotest_common.sh@960 -- # wait 3042463 00:19:11.331 17:23:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:11.331 17:23:20 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:19:11.331 00:19:11.331 real 0m12.253s 00:19:11.331 user 0m43.162s 00:19:11.331 sys 0m4.738s 00:19:11.331 17:23:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:11.331 17:23:20 -- common/autotest_common.sh@10 -- # set +x 00:19:11.331 ************************************ 00:19:11.331 END TEST nvmf_fio_host 00:19:11.331 ************************************ 00:19:11.331 17:23:20 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:19:11.331 17:23:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:11.331 17:23:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:11.331 17:23:20 -- common/autotest_common.sh@10 -- # set +x 00:19:11.590 ************************************ 00:19:11.590 START TEST nvmf_failover 00:19:11.590 ************************************ 00:19:11.590 17:23:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:19:11.590 * Looking for test storage... 
00:19:11.590 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:11.590 17:23:20 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:11.590 17:23:20 -- nvmf/common.sh@7 -- # uname -s 00:19:11.590 17:23:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:11.590 17:23:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:11.590 17:23:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:11.590 17:23:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:11.590 17:23:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:11.590 17:23:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:11.590 17:23:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:11.590 17:23:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:11.590 17:23:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:11.590 17:23:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:11.590 17:23:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:11.590 17:23:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:19:11.590 17:23:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:11.590 17:23:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:11.590 17:23:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:11.590 17:23:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:11.590 17:23:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:11.590 17:23:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:11.590 17:23:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:11.590 17:23:20 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:11.590 17:23:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.590 17:23:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.590 17:23:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.590 17:23:20 -- paths/export.sh@5 -- # export PATH 00:19:11.590 17:23:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.590 17:23:20 -- nvmf/common.sh@47 -- # : 0 00:19:11.590 17:23:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:11.590 17:23:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:11.590 17:23:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:11.590 17:23:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:11.590 17:23:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:11.590 17:23:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:11.590 17:23:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:11.590 17:23:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:11.590 17:23:20 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:11.590 17:23:20 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:11.590 17:23:20 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:11.590 17:23:20 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:11.590 17:23:20 -- host/failover.sh@18 -- # nvmftestinit 00:19:11.590 17:23:20 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:19:11.590 17:23:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:11.590 17:23:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:11.590 17:23:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:11.590 17:23:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:11.590 17:23:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.590 17:23:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:11.590 17:23:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:11.590 17:23:20 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:11.590 17:23:20 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:11.590 17:23:20 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:11.590 17:23:20 -- common/autotest_common.sh@10 -- # set +x 00:19:16.850 17:23:25 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:16.850 17:23:25 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:16.850 17:23:25 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:16.850 17:23:25 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:16.850 17:23:25 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:16.850 17:23:25 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:16.850 17:23:25 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:19:16.850 17:23:25 -- nvmf/common.sh@295 -- # net_devs=() 00:19:16.850 17:23:25 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:16.850 17:23:25 -- nvmf/common.sh@296 -- # e810=() 00:19:16.850 17:23:25 -- nvmf/common.sh@296 -- # local -ga e810 00:19:16.850 17:23:25 -- nvmf/common.sh@297 -- # x722=() 00:19:16.850 17:23:25 -- nvmf/common.sh@297 -- # local -ga x722 00:19:16.850 17:23:25 -- nvmf/common.sh@298 -- # mlx=() 00:19:16.850 17:23:25 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:16.850 17:23:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:16.850 17:23:25 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:16.850 17:23:25 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:16.850 17:23:25 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:16.850 17:23:25 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:16.850 17:23:25 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:16.850 17:23:25 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:16.850 17:23:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:16.850 17:23:25 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:16.850 17:23:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:16.850 17:23:25 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:16.850 17:23:25 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:16.850 17:23:25 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:16.850 17:23:25 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:16.850 17:23:25 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:16.850 17:23:25 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:16.850 17:23:25 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:16.850 17:23:25 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:16.850 17:23:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:16.850 17:23:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:19:16.850 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:19:16.850 17:23:25 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:16.850 17:23:25 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:16.850 17:23:25 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:16.850 17:23:25 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:16.850 17:23:25 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:16.850 17:23:25 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:16.850 17:23:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:16.850 17:23:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:19:16.850 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:19:16.850 17:23:25 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:16.850 17:23:25 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:16.850 17:23:25 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:16.850 17:23:25 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:16.850 17:23:25 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:16.850 17:23:25 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:16.850 17:23:25 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:16.850 17:23:25 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:16.850 17:23:25 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:16.850 17:23:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.850 17:23:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:16.850 17:23:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.850 17:23:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:19:16.850 Found net devices under 0000:da:00.0: mlx_0_0 00:19:16.850 17:23:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.850 17:23:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:16.850 17:23:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.850 17:23:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:16.850 17:23:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.850 17:23:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:19:16.850 Found net devices under 0000:da:00.1: mlx_0_1 00:19:16.850 17:23:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.851 17:23:25 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:16.851 17:23:25 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:16.851 17:23:25 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:16.851 17:23:25 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:19:16.851 17:23:25 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:19:16.851 17:23:25 -- nvmf/common.sh@409 -- # rdma_device_init 00:19:16.851 17:23:25 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:19:16.851 17:23:25 -- nvmf/common.sh@58 -- # uname 00:19:16.851 17:23:25 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:16.851 17:23:25 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:16.851 17:23:25 -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:16.851 17:23:25 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:16.851 17:23:25 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:16.851 17:23:25 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:16.851 17:23:25 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:16.851 17:23:25 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:16.851 17:23:25 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:19:16.851 17:23:25 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:16.851 17:23:25 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:16.851 17:23:25 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:16.851 17:23:25 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:16.851 17:23:25 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:16.851 17:23:25 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:16.851 17:23:25 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:16.851 17:23:25 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:16.851 17:23:25 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:16.851 17:23:25 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:16.851 17:23:25 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:16.851 17:23:25 -- nvmf/common.sh@105 -- # continue 2 00:19:16.851 17:23:25 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:16.851 17:23:25 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:16.851 17:23:25 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:16.851 17:23:25 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:16.851 17:23:25 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\1 ]] 00:19:16.851 17:23:25 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:16.851 17:23:25 -- nvmf/common.sh@105 -- # continue 2 00:19:16.851 17:23:25 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:16.851 17:23:25 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:16.851 17:23:25 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:16.851 17:23:25 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:16.851 17:23:25 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:16.851 17:23:25 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:16.851 17:23:25 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:16.851 17:23:25 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:16.851 17:23:25 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:16.851 434: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:16.851 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:19:16.851 altname enp218s0f0np0 00:19:16.851 altname ens818f0np0 00:19:16.851 inet 192.168.100.8/24 scope global mlx_0_0 00:19:16.851 valid_lft forever preferred_lft forever 00:19:16.851 17:23:25 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:16.851 17:23:25 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:16.851 17:23:25 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:16.851 17:23:25 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:16.851 17:23:25 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:16.851 17:23:25 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:16.851 17:23:25 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:16.851 17:23:25 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:16.851 17:23:25 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:16.851 435: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:16.851 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:19:16.851 altname enp218s0f1np1 00:19:16.851 altname ens818f1np1 00:19:16.851 inet 192.168.100.9/24 scope global mlx_0_1 00:19:16.851 valid_lft forever preferred_lft forever 00:19:16.851 17:23:25 -- nvmf/common.sh@411 -- # return 0 00:19:16.851 17:23:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:16.851 17:23:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:16.851 17:23:25 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:19:16.851 17:23:25 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:19:16.851 17:23:25 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:16.851 17:23:25 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:16.851 17:23:25 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:16.851 17:23:25 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:16.851 17:23:25 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:16.851 17:23:25 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:16.851 17:23:25 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:16.851 17:23:25 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:16.851 17:23:25 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:16.851 17:23:25 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:16.851 17:23:25 -- nvmf/common.sh@105 -- # continue 2 00:19:16.851 17:23:25 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:16.851 17:23:25 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:16.851 17:23:25 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:16.851 17:23:25 -- nvmf/common.sh@102 -- # for rxe_net_dev 
in "${rxe_net_devs[@]}" 00:19:16.851 17:23:25 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:16.851 17:23:25 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:16.851 17:23:25 -- nvmf/common.sh@105 -- # continue 2 00:19:16.851 17:23:25 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:16.851 17:23:25 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:16.851 17:23:25 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:16.851 17:23:25 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:16.851 17:23:25 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:16.851 17:23:25 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:16.851 17:23:25 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:16.851 17:23:25 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:16.851 17:23:25 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:16.851 17:23:25 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:16.851 17:23:25 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:16.851 17:23:25 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:16.851 17:23:25 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:19:16.851 192.168.100.9' 00:19:16.851 17:23:25 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:16.851 192.168.100.9' 00:19:16.851 17:23:25 -- nvmf/common.sh@446 -- # head -n 1 00:19:16.851 17:23:25 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:16.851 17:23:25 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:19:16.851 192.168.100.9' 00:19:16.851 17:23:25 -- nvmf/common.sh@447 -- # tail -n +2 00:19:16.851 17:23:25 -- nvmf/common.sh@447 -- # head -n 1 00:19:16.851 17:23:25 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:16.851 17:23:25 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:19:16.851 17:23:25 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:16.851 17:23:25 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:19:16.851 17:23:25 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:19:16.851 17:23:25 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:19:16.851 17:23:25 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:19:16.851 17:23:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:16.851 17:23:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:16.851 17:23:25 -- common/autotest_common.sh@10 -- # set +x 00:19:16.851 17:23:25 -- nvmf/common.sh@470 -- # nvmfpid=3045012 00:19:16.851 17:23:25 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:16.851 17:23:25 -- nvmf/common.sh@471 -- # waitforlisten 3045012 00:19:16.851 17:23:25 -- common/autotest_common.sh@817 -- # '[' -z 3045012 ']' 00:19:16.851 17:23:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.851 17:23:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:16.851 17:23:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.851 17:23:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:16.851 17:23:25 -- common/autotest_common.sh@10 -- # set +x 00:19:16.851 [2024-04-24 17:23:25.293099] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:19:16.851 [2024-04-24 17:23:25.293140] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.851 EAL: No free 2048 kB hugepages reported on node 1 00:19:16.851 [2024-04-24 17:23:25.348141] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:16.851 [2024-04-24 17:23:25.416518] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.851 [2024-04-24 17:23:25.416560] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:16.851 [2024-04-24 17:23:25.416566] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.851 [2024-04-24 17:23:25.416572] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.851 [2024-04-24 17:23:25.416577] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:16.851 [2024-04-24 17:23:25.416685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.851 [2024-04-24 17:23:25.416770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:16.851 [2024-04-24 17:23:25.416770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.851 17:23:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:16.851 17:23:26 -- common/autotest_common.sh@850 -- # return 0 00:19:16.851 17:23:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:16.851 17:23:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:16.851 17:23:26 -- common/autotest_common.sh@10 -- # set +x 00:19:17.110 17:23:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:17.110 17:23:26 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:17.110 [2024-04-24 17:23:26.297708] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1652680/0x1656b70) succeed. 00:19:17.110 [2024-04-24 17:23:26.307919] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1653bd0/0x1698200) succeed. 
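Condensed, the target bring-up traced to this point is: start nvmf_tgt, wait for its RPC socket, then create the RDMA transport. A minimal sketch, assuming it is run from the SPDK checkout used by this job (the full /var/jenkins/workspace/... prefixes are shortened, and the liveness probe is only an approximation of the harness's waitforlisten helper):

./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# Poll the default RPC socket until the target answers; rpc_get_methods is used here
# purely as a cheap readiness check, the harness's waitforlisten does more.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done

# RDMA transport with the shared-buffer count and IO unit size shown in the trace above
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192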
00:19:17.368 17:23:26 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:17.368 Malloc0 00:19:17.626 17:23:26 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:17.626 17:23:26 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:17.884 17:23:26 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:17.884 [2024-04-24 17:23:27.094929] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:17.884 17:23:27 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:19:18.142 [2024-04-24 17:23:27.267237] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:19:18.142 17:23:27 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:19:18.401 [2024-04-24 17:23:27.439803] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:19:18.401 17:23:27 -- host/failover.sh@31 -- # bdevperf_pid=3045065 00:19:18.401 17:23:27 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:19:18.401 17:23:27 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:18.401 17:23:27 -- host/failover.sh@34 -- # waitforlisten 3045065 /var/tmp/bdevperf.sock 00:19:18.401 17:23:27 -- common/autotest_common.sh@817 -- # '[' -z 3045065 ']' 00:19:18.401 17:23:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:18.401 17:23:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:18.401 17:23:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:18.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
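The RPCs traced above then build the test subsystem and its three RDMA listeners before bdevperf is launched; condensed into a sketch (workspace paths shortened, and $nqn / $ip are just shorthand for the literals in the trace):

nqn=nqn.2016-06.io.spdk:cnode1
ip=192.168.100.8

./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem $nqn -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns $nqn Malloc0

# ports 4420-4422 are the three paths the failover steps below add and remove
for port in 4420 4421 4422; do
    ./scripts/rpc.py nvmf_subsystem_add_listener $nqn -t rdma -a $ip -s $port
done

# bdevperf runs against its own RPC socket; the bdev_nvme_attach_controller calls and the
# listener add/remove sequence that drive the failover follow in the trace below
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
bdevperf_pid=$!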
00:19:18.401 17:23:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:18.401 17:23:27 -- common/autotest_common.sh@10 -- # set +x 00:19:19.332 17:23:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:19.332 17:23:28 -- common/autotest_common.sh@850 -- # return 0 00:19:19.332 17:23:28 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:19.332 NVMe0n1 00:19:19.332 17:23:28 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:19.589 00:19:19.589 17:23:28 -- host/failover.sh@39 -- # run_test_pid=3045096 00:19:19.589 17:23:28 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:19.589 17:23:28 -- host/failover.sh@41 -- # sleep 1 00:19:20.961 17:23:29 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:20.961 17:23:29 -- host/failover.sh@45 -- # sleep 3 00:19:24.240 17:23:32 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:24.240 00:19:24.240 17:23:33 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:19:24.240 17:23:33 -- host/failover.sh@50 -- # sleep 3 00:19:27.522 17:23:36 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:27.522 [2024-04-24 17:23:36.575028] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:27.522 17:23:36 -- host/failover.sh@55 -- # sleep 1 00:19:28.479 17:23:37 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:19:28.736 17:23:37 -- host/failover.sh@59 -- # wait 3045096 00:19:35.321 0 00:19:35.321 17:23:43 -- host/failover.sh@61 -- # killprocess 3045065 00:19:35.321 17:23:43 -- common/autotest_common.sh@936 -- # '[' -z 3045065 ']' 00:19:35.321 17:23:43 -- common/autotest_common.sh@940 -- # kill -0 3045065 00:19:35.321 17:23:43 -- common/autotest_common.sh@941 -- # uname 00:19:35.321 17:23:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:35.321 17:23:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3045065 00:19:35.321 17:23:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:35.321 17:23:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:35.321 17:23:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3045065' 00:19:35.321 killing process with pid 3045065 00:19:35.321 17:23:43 -- common/autotest_common.sh@955 -- # kill 3045065 00:19:35.321 17:23:43 -- common/autotest_common.sh@960 -- # wait 3045065 00:19:35.321 17:23:44 -- host/failover.sh@63 -- # cat 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:35.321 [2024-04-24 17:23:27.511304] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:19:35.321 [2024-04-24 17:23:27.511357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3045065 ] 00:19:35.321 EAL: No free 2048 kB hugepages reported on node 1 00:19:35.321 [2024-04-24 17:23:27.565478] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.321 [2024-04-24 17:23:27.637193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.321 Running I/O for 15 seconds... 00:19:35.321 [2024-04-24 17:23:30.969010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:26128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x186f00 00:19:35.321 [2024-04-24 17:23:30.969052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.321 [2024-04-24 17:23:30.969070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x186f00 00:19:35.321 [2024-04-24 17:23:30.969077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.321 [2024-04-24 17:23:30.969087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:26144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x186f00 00:19:35.321 [2024-04-24 17:23:30.969094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.321 [2024-04-24 17:23:30.969103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:26152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x186f00 00:19:35.321 [2024-04-24 17:23:30.969110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.321 [2024-04-24 17:23:30.969118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:26160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x186f00 00:19:35.321 [2024-04-24 17:23:30.969125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.321 [2024-04-24 17:23:30.969134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:26168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x186f00 00:19:35.321 [2024-04-24 17:23:30.969141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.321 [2024-04-24 17:23:30.969150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x186f00 00:19:35.321 [2024-04-24 17:23:30.969157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.321 [2024-04-24 17:23:30.969167] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:26184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x186f00 00:19:35.321 [2024-04-24 17:23:30.969174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.321 [2024-04-24 17:23:30.969182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:26192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x186f00 00:19:35.321 [2024-04-24 17:23:30.969189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.321 [2024-04-24 17:23:30.969197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:26200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x186f00 00:19:35.321 [2024-04-24 17:23:30.969209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.321 [2024-04-24 17:23:30.969218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:26208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x186f00 00:19:35.321 [2024-04-24 17:23:30.969224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.322 [2024-04-24 17:23:30.969233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:26216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x186f00 00:19:35.322 [2024-04-24 17:23:30.969240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.322 [2024-04-24 17:23:30.969248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:26224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x186f00 00:19:35.322 [2024-04-24 17:23:30.969256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.322 [2024-04-24 17:23:30.969265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:26232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x186f00 00:19:35.322 [2024-04-24 17:23:30.969272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.322 [2024-04-24 17:23:30.969281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:26240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x186f00 00:19:35.322 [2024-04-24 17:23:30.969288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.322 [2024-04-24 17:23:30.969297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:26248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x186f00 00:19:35.322 [2024-04-24 17:23:30.969304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.322 [2024-04-24 17:23:30.969312] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:26256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x186f00 00:19:35.322 [2024-04-24 17:23:30.969319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.322 [2024-04-24 17:23:30.969327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:26264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x186f00 00:19:35.322 [2024-04-24 17:23:30.969336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.322 [2024-04-24 17:23:30.969346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x186f00 00:19:35.322 [2024-04-24 17:23:30.969353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.322 [2024-04-24 17:23:30.969362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:26280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x186f00 00:19:35.322 [2024-04-24 17:23:30.969369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.322 [2024-04-24 17:23:30.969377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:26288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x186f00 00:19:35.322 [2024-04-24 17:23:30.969386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.322 [2024-04-24 17:23:30.969396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:26296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x186f00 00:19:35.322 [2024-04-24 17:23:30.969403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.322 [2024-04-24 17:23:30.969412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:26304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x186f00 00:19:35.322 [2024-04-24 17:23:30.969420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.322 [2024-04-24 17:23:30.969428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:26312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x186f00 00:19:35.322 [2024-04-24 17:23:30.969435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.322 [2024-04-24 17:23:30.969444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:26320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x186f00 00:19:35.322 [2024-04-24 17:23:30.969451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.322 [2024-04-24 17:23:30.969459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:114 nsid:1 lba:26328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x186f00 00:19:35.322 [2024-04-24 17:23:30.969465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.322 [2024-04-24 17:23:30.969474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:26336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x186f00 00:19:35.322 [2024-04-24 17:23:30.969480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.322 [2024-04-24 17:23:30.969489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x186f00 00:19:35.322 [2024-04-24 17:23:30.969496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.322 [2024-04-24 17:23:30.969504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:26352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x186f00 00:19:35.322 [2024-04-24 17:23:30.969511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.322 [2024-04-24 17:23:30.969520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:26360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x186f00 00:19:35.322 [2024-04-24 17:23:30.969527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.322 [2024-04-24 17:23:30.969535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:26368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x186f00 00:19:35.322 [2024-04-24 17:23:30.969542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.322 [2024-04-24 17:23:30.969550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:26376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x186f00 00:19:35.322 [2024-04-24 17:23:30.969557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.322 [2024-04-24 17:23:30.969565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:26384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x186f00 00:19:35.322 [2024-04-24 17:23:30.969573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.322 [2024-04-24 17:23:30.969582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x186f00 00:19:35.322 [2024-04-24 17:23:30.969589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.322 [2024-04-24 17:23:30.969598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:26400 len:8 
SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x186f00 00:19:35.322 [2024-04-24 17:23:30.969604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.322 [2024-04-24 17:23:30.969612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:26408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x186f00 00:19:35.322 [2024-04-24 17:23:30.969619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.322 [2024-04-24 17:23:30.969627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:26416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x186f00 00:19:35.322 [2024-04-24 17:23:30.969634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.322 [2024-04-24 17:23:30.969642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:26424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x186f00 00:19:35.322 [2024-04-24 17:23:30.969649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.322 [2024-04-24 17:23:30.969657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:26432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x186f00 00:19:35.322 [2024-04-24 17:23:30.969664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.322 [2024-04-24 17:23:30.969673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:26440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x186f00 00:19:35.322 [2024-04-24 17:23:30.969680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.322 [2024-04-24 17:23:30.969688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:26448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x186f00 00:19:35.322 [2024-04-24 17:23:30.969694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.322 [2024-04-24 17:23:30.969703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:26456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x186f00 00:19:35.322 [2024-04-24 17:23:30.969709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.322 [2024-04-24 17:23:30.969717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:26464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x186f00 00:19:35.322 [2024-04-24 17:23:30.969724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.322 [2024-04-24 17:23:30.969732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:26472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 
len:0x1000 key:0x186f00 00:19:35.322 [2024-04-24 17:23:30.969750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.323 [2024-04-24 17:23:30.969758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:26480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x186f00 00:19:35.323 [2024-04-24 17:23:30.969765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.323 [2024-04-24 17:23:30.969773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:26488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x186f00 00:19:35.323 [2024-04-24 17:23:30.969779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.323 [2024-04-24 17:23:30.969788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x186f00 00:19:35.323 [2024-04-24 17:23:30.969794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.323 [2024-04-24 17:23:30.969802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:26504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x186f00 00:19:35.323 [2024-04-24 17:23:30.969809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.323 [2024-04-24 17:23:30.969818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:26512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x186f00 00:19:35.323 [2024-04-24 17:23:30.969824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.323 [2024-04-24 17:23:30.969835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:26520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x186f00 00:19:35.323 [2024-04-24 17:23:30.969842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.323 [2024-04-24 17:23:30.969850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:26528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x186f00 00:19:35.323 [2024-04-24 17:23:30.969857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.323 [2024-04-24 17:23:30.969864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:26536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x186f00 00:19:35.323 [2024-04-24 17:23:30.969871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.323 [2024-04-24 17:23:30.969879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x186f00 00:19:35.323 
[2024-04-24 17:23:30.969886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.323 [2024-04-24 17:23:30.969894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:26552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x186f00 00:19:35.323 [2024-04-24 17:23:30.969901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.323 [2024-04-24 17:23:30.969910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:26560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x186f00 00:19:35.323 [2024-04-24 17:23:30.969918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.323 [2024-04-24 17:23:30.969926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:26568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x186f00 00:19:35.323 [2024-04-24 17:23:30.969933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.323 [2024-04-24 17:23:30.969941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:26576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x186f00 00:19:35.323 [2024-04-24 17:23:30.969947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.323 [2024-04-24 17:23:30.969955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:26584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x186f00 00:19:35.323 [2024-04-24 17:23:30.969962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.323 [2024-04-24 17:23:30.969970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:26592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x186f00 00:19:35.323 [2024-04-24 17:23:30.969977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.323 [2024-04-24 17:23:30.969986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:26600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x186f00 00:19:35.323 [2024-04-24 17:23:30.969992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.323 [2024-04-24 17:23:30.970000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:26608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x186f00 00:19:35.323 [2024-04-24 17:23:30.970006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.323 [2024-04-24 17:23:30.970015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:26616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x186f00 00:19:35.323 [2024-04-24 17:23:30.970022] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.323 [2024-04-24 17:23:30.970030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:26624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.323 [2024-04-24 17:23:30.970036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.323 [2024-04-24 17:23:30.970044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:26632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.323 [2024-04-24 17:23:30.970051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.323 [2024-04-24 17:23:30.970059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:26640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.323 [2024-04-24 17:23:30.970066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.323 [2024-04-24 17:23:30.970074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.323 [2024-04-24 17:23:30.970081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.323 [2024-04-24 17:23:30.970090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:26656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.323 [2024-04-24 17:23:30.970101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.323 [2024-04-24 17:23:30.970109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:26664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.323 [2024-04-24 17:23:30.970115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.323 [2024-04-24 17:23:30.970124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.323 [2024-04-24 17:23:30.970131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.323 [2024-04-24 17:23:30.970139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:26680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.323 [2024-04-24 17:23:30.970145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.323 [2024-04-24 17:23:30.970154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:26688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.323 [2024-04-24 17:23:30.970160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.323 [2024-04-24 17:23:30.970168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:26696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:35.323 [2024-04-24 17:23:30.970175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.323 [2024-04-24 17:23:30.970184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:26704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.323 [2024-04-24 17:23:30.970190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.323 [2024-04-24 17:23:30.970198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:26712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:26736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:26744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:26760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:26768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 
lba:26776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:26784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:26792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:26800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:26808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:26816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:26824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:26840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:26848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970464] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:26856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:26864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:26872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:26880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:26888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:26896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:26904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:26920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:26928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 
00:19:35.324 [2024-04-24 17:23:30.970610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:26936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:26952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:26960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:26976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:26984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.324 [2024-04-24 17:23:30.970713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:26992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.324 [2024-04-24 17:23:30.970720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.325 [2024-04-24 17:23:30.970728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:27000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.325 [2024-04-24 17:23:30.970735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.325 [2024-04-24 17:23:30.970742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:27008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.325 [2024-04-24 17:23:30.970749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.325 [2024-04-24 17:23:30.970756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:27016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.325 [2024-04-24 17:23:30.970764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.325 [2024-04-24 17:23:30.970772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:27024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.325 [2024-04-24 17:23:30.970779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.325 [2024-04-24 17:23:30.970787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:27032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.325 [2024-04-24 17:23:30.970794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.325 [2024-04-24 17:23:30.970802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:27040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.325 [2024-04-24 17:23:30.970808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.325 [2024-04-24 17:23:30.970816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:27048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.325 [2024-04-24 17:23:30.970824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.325 [2024-04-24 17:23:30.970836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:27056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.325 [2024-04-24 17:23:30.970842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.325 [2024-04-24 17:23:30.970850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:27064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.325 [2024-04-24 17:23:30.970857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.325 [2024-04-24 17:23:30.970865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.325 [2024-04-24 17:23:30.970872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.325 [2024-04-24 17:23:30.970880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:27080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.325 [2024-04-24 17:23:30.970887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.325 [2024-04-24 17:23:30.970894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:27088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.325 [2024-04-24 17:23:30.970905] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.325 [2024-04-24 17:23:30.970914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:27096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.325 [2024-04-24 17:23:30.970921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.325 [2024-04-24 17:23:30.970929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:27104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.325 [2024-04-24 17:23:30.970947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.325 [2024-04-24 17:23:30.970955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:27112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.325 [2024-04-24 17:23:30.970962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.325 [2024-04-24 17:23:30.970969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:27120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.325 [2024-04-24 17:23:30.970976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.325 [2024-04-24 17:23:30.970985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:27128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.325 [2024-04-24 17:23:30.970991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.325 [2024-04-24 17:23:30.970999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.325 [2024-04-24 17:23:30.971006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.325 [2024-04-24 17:23:30.972988] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.325 [2024-04-24 17:23:30.973000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.325 [2024-04-24 17:23:30.973010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27144 len:8 PRP1 0x0 PRP2 0x0 00:19:35.325 [2024-04-24 17:23:30.973017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.325 [2024-04-24 17:23:30.973052] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a00 was disconnected and freed. reset controller. 00:19:35.325 [2024-04-24 17:23:30.973062] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:19:35.325 [2024-04-24 17:23:30.973069] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
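The entries above close out one failover cycle: the commands still queued on qpair 1 are completed with the status shown in parentheses, the disconnected qpair 0x2000192e4a00 is freed, and bdev_nvme fails over from 192.168.100.8:4420 to 192.168.100.8:4421 before resetting the controller. The "(00/08)" printed by spdk_nvme_print_completion is the status code type / status code pair; below is a minimal sketch (not from this run, and assuming the standard NVMe generic command status values) of how that pair maps to the "ABORTED - SQ DELETION" text:

# Minimal decoding sketch for the "(SCT/SC)" pair in the completions above.
# Values are from the NVMe "Generic Command Status" table; 0x08 is the status
# that spdk_nvme_print_completion renders as "ABORTED - SQ DELETION".
GENERIC_COMMAND_STATUS = {
    0x00: "Successful Completion",
    0x04: "Data Transfer Error",
    0x07: "Command Abort Requested",
    0x08: "Command Aborted due to SQ Deletion",
}

def decode_status(sct: int, sc: int) -> str:
    if sct == 0x0:  # generic command status, the only type seen in this log
        return GENERIC_COMMAND_STATUS.get(sc, f"generic status 0x{sc:02x}")
    return f"sct 0x{sct:x} sc 0x{sc:02x}"

print(decode_status(0x00, 0x08))  # "(00/08)" -> Command Aborted due to SQ Deletion

Since dnr:0 (do not retry) is clear on these completions, the aborted I/O can be retried once the new path at 4421 is connected.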
00:19:35.325 [2024-04-24 17:23:30.975859] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:35.325 [2024-04-24 17:23:30.990542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:35.325 [2024-04-24 17:23:31.036309] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:35.325 [2024-04-24 17:23:34.402668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:120296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.325 [2024-04-24 17:23:34.402710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.325 [2024-04-24 17:23:34.402725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:120304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.325 [2024-04-24 17:23:34.402732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.325 [2024-04-24 17:23:34.402742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:120312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.325 [2024-04-24 17:23:34.402748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.325 [2024-04-24 17:23:34.402757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:120320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.325 [2024-04-24 17:23:34.402763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.325 [2024-04-24 17:23:34.402771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:120328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.325 [2024-04-24 17:23:34.402777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.326 [2024-04-24 17:23:34.402785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:120336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.326 [2024-04-24 17:23:34.402791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.326 [2024-04-24 17:23:34.402800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:120344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.326 [2024-04-24 17:23:34.402806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.326 [2024-04-24 17:23:34.402814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:120352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.326 [2024-04-24 17:23:34.402820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.326 [2024-04-24 17:23:34.402830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:120360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.326 [2024-04-24 
17:23:34.402837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.326 [2024-04-24 17:23:34.402854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:120368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.326 [2024-04-24 17:23:34.402861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.326 [2024-04-24 17:23:34.402869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.326 [2024-04-24 17:23:34.402875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.326 [2024-04-24 17:23:34.402884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:120384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.326 [2024-04-24 17:23:34.402890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.326 [2024-04-24 17:23:34.402898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:120392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.326 [2024-04-24 17:23:34.402904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.326 [2024-04-24 17:23:34.402912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.326 [2024-04-24 17:23:34.402918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.326 [2024-04-24 17:23:34.402927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:120408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.326 [2024-04-24 17:23:34.402933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.326 [2024-04-24 17:23:34.402941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:120416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.326 [2024-04-24 17:23:34.402947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.326 [2024-04-24 17:23:34.402955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:120424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.326 [2024-04-24 17:23:34.402963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.326 [2024-04-24 17:23:34.402974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.326 [2024-04-24 17:23:34.402982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.326 [2024-04-24 17:23:34.402990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:120440 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.326 [2024-04-24 17:23:34.402996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.326 [2024-04-24 17:23:34.403004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:120448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.326 [2024-04-24 17:23:34.403010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.326 [2024-04-24 17:23:34.403018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:120456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.326 [2024-04-24 17:23:34.403024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.326 [2024-04-24 17:23:34.403035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:120464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.326 [2024-04-24 17:23:34.403043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.326 [2024-04-24 17:23:34.403052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:120472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.326 [2024-04-24 17:23:34.403059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.326 [2024-04-24 17:23:34.403068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:120480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.326 [2024-04-24 17:23:34.403074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.326 [2024-04-24 17:23:34.403082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:120488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.326 [2024-04-24 17:23:34.403091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.326 [2024-04-24 17:23:34.403099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:120496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.326 [2024-04-24 17:23:34.403107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.326 [2024-04-24 17:23:34.403115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:120504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.326 [2024-04-24 17:23:34.403122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.326 [2024-04-24 17:23:34.403130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:120512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.326 [2024-04-24 17:23:34.403138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.326 [2024-04-24 17:23:34.403146] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:120520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.326 [2024-04-24 17:23:34.403152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.326 [2024-04-24 17:23:34.403161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:120528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.327 [2024-04-24 17:23:34.403166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.327 [2024-04-24 17:23:34.403175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:119888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x186f00 00:19:35.327 [2024-04-24 17:23:34.403198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.327 [2024-04-24 17:23:34.403207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:119896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x186f00 00:19:35.327 [2024-04-24 17:23:34.403214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.327 [2024-04-24 17:23:34.403223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:119904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x186f00 00:19:35.327 [2024-04-24 17:23:34.403230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.327 [2024-04-24 17:23:34.403241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:119912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x186f00 00:19:35.327 [2024-04-24 17:23:34.403248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.327 [2024-04-24 17:23:34.403257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:119920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x186f00 00:19:35.327 [2024-04-24 17:23:34.403263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.327 [2024-04-24 17:23:34.403272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x186f00 00:19:35.327 [2024-04-24 17:23:34.403279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.327 [2024-04-24 17:23:34.403288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:119936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x186f00 00:19:35.327 [2024-04-24 17:23:34.403295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.327 [2024-04-24 17:23:34.403303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:120536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:35.327 [2024-04-24 17:23:34.403310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.327 [2024-04-24 17:23:34.403318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:120544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.327 [2024-04-24 17:23:34.403325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.327 [2024-04-24 17:23:34.403333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:120552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.327 [2024-04-24 17:23:34.403339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.327 [2024-04-24 17:23:34.403347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.327 [2024-04-24 17:23:34.403354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.327 [2024-04-24 17:23:34.403362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:120568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.327 [2024-04-24 17:23:34.403369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.327 [2024-04-24 17:23:34.403377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:120576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.327 [2024-04-24 17:23:34.403384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.327 [2024-04-24 17:23:34.403393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:119944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x186f00 00:19:35.327 [2024-04-24 17:23:34.403399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.327 [2024-04-24 17:23:34.403408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:119952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x186f00 00:19:35.327 [2024-04-24 17:23:34.403416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.327 [2024-04-24 17:23:34.403424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:119960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x186f00 00:19:35.327 [2024-04-24 17:23:34.403431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.327 [2024-04-24 17:23:34.403440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:119968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x186f00 00:19:35.327 [2024-04-24 17:23:34.403446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 
00:19:35.327 [2024-04-24 17:23:34.403455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:119976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x186f00 00:19:35.327 [2024-04-24 17:23:34.403462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.327 [2024-04-24 17:23:34.403471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:119984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x186f00 00:19:35.327 [2024-04-24 17:23:34.403478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.327 [2024-04-24 17:23:34.403486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:119992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x186f00 00:19:35.327 [2024-04-24 17:23:34.403493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.327 [2024-04-24 17:23:34.403501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x186f00 00:19:35.327 [2024-04-24 17:23:34.403508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.327 [2024-04-24 17:23:34.403516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:120008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x186f00 00:19:35.327 [2024-04-24 17:23:34.403523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.327 [2024-04-24 17:23:34.403532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:120016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x186f00 00:19:35.327 [2024-04-24 17:23:34.403539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.327 [2024-04-24 17:23:34.403547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x186f00 00:19:35.327 [2024-04-24 17:23:34.403553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.327 [2024-04-24 17:23:34.403562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:120032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x186f00 00:19:35.327 [2024-04-24 17:23:34.403568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.327 [2024-04-24 17:23:34.403577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:120040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x186f00 00:19:35.327 [2024-04-24 17:23:34.403584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.327 [2024-04-24 
17:23:34.403594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:120048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x186f00 00:19:35.327 [2024-04-24 17:23:34.403602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.327 [2024-04-24 17:23:34.403610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:120056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x186f00 00:19:35.327 [2024-04-24 17:23:34.403617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.327 [2024-04-24 17:23:34.403625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x186f00 00:19:35.327 [2024-04-24 17:23:34.403633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.327 [2024-04-24 17:23:34.403641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.327 [2024-04-24 17:23:34.403648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.327 [2024-04-24 17:23:34.403656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.327 [2024-04-24 17:23:34.403663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.327 [2024-04-24 17:23:34.403672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.328 [2024-04-24 17:23:34.403678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.403687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:120608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.328 [2024-04-24 17:23:34.403694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.403703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:120616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.328 [2024-04-24 17:23:34.403709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.403717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.328 [2024-04-24 17:23:34.403724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.403732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:120632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.328 [2024-04-24 17:23:34.403739] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.403748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:120640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.328 [2024-04-24 17:23:34.403755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.403762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:120648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.328 [2024-04-24 17:23:34.403769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.403778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:120656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.328 [2024-04-24 17:23:34.403785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.403794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:120664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.328 [2024-04-24 17:23:34.403801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.403809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:120672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.328 [2024-04-24 17:23:34.403816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.403828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.328 [2024-04-24 17:23:34.403834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.403842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:120688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.328 [2024-04-24 17:23:34.403849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.403857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:120696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.328 [2024-04-24 17:23:34.403864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.403873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:120704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.328 [2024-04-24 17:23:34.403879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.403889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:120712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:35.328 [2024-04-24 17:23:34.403895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.403903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:120720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.328 [2024-04-24 17:23:34.403910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.403920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:120728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.328 [2024-04-24 17:23:34.403926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.403935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:120736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.328 [2024-04-24 17:23:34.403942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.403949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:120744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.328 [2024-04-24 17:23:34.403956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.403967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:120752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.328 [2024-04-24 17:23:34.403974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.403982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:120760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.328 [2024-04-24 17:23:34.403989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.403997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:120768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.328 [2024-04-24 17:23:34.404004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.404012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x186f00 00:19:35.328 [2024-04-24 17:23:34.404019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.404028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:120080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x186f00 00:19:35.328 [2024-04-24 17:23:34.404047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.404055] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:120776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.328 [2024-04-24 17:23:34.404061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.404069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:120784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.328 [2024-04-24 17:23:34.404076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.404084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:120792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.328 [2024-04-24 17:23:34.404090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.404098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:120800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.328 [2024-04-24 17:23:34.404104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.404112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:120808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.328 [2024-04-24 17:23:34.404119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.404127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.328 [2024-04-24 17:23:34.404133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.404141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.328 [2024-04-24 17:23:34.404148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.404157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.328 [2024-04-24 17:23:34.404163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.404171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.328 [2024-04-24 17:23:34.404178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.404204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.328 [2024-04-24 17:23:34.404212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 
dnr:0 00:19:35.328 [2024-04-24 17:23:34.404221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.328 [2024-04-24 17:23:34.404227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.328 [2024-04-24 17:23:34.404236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.328 [2024-04-24 17:23:34.404243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.329 [2024-04-24 17:23:34.404251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.329 [2024-04-24 17:23:34.404258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.329 [2024-04-24 17:23:34.404266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.329 [2024-04-24 17:23:34.404273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.329 [2024-04-24 17:23:34.404281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.329 [2024-04-24 17:23:34.404288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.329 [2024-04-24 17:23:34.404297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:120896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.329 [2024-04-24 17:23:34.404303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.329 [2024-04-24 17:23:34.404312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:120088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x186f00 00:19:35.329 [2024-04-24 17:23:34.404319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.329 [2024-04-24 17:23:34.404327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:120096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x186f00 00:19:35.329 [2024-04-24 17:23:34.404334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.329 [2024-04-24 17:23:34.404342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:120104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x186f00 00:19:35.329 [2024-04-24 17:23:34.404349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.329 [2024-04-24 17:23:34.404359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:120112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 
key:0x186f00 00:19:35.329 [2024-04-24 17:23:34.404366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.329 [2024-04-24 17:23:34.404374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:120120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x186f00 00:19:35.329 [2024-04-24 17:23:34.404381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.329 [2024-04-24 17:23:34.404401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:120128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x186f00 00:19:35.329 [2024-04-24 17:23:34.404408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.329 [2024-04-24 17:23:34.404417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:120136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x186f00 00:19:35.329 [2024-04-24 17:23:34.404423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.329 [2024-04-24 17:23:34.404431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:120144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x186f00 00:19:35.329 [2024-04-24 17:23:34.404438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.329 [2024-04-24 17:23:34.404445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:120152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x186f00 00:19:35.329 [2024-04-24 17:23:34.404453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.329 [2024-04-24 17:23:34.404461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:120160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x186f00 00:19:35.329 [2024-04-24 17:23:34.404468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.329 [2024-04-24 17:23:34.404476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:120168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x186f00 00:19:35.329 [2024-04-24 17:23:34.404482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.329 [2024-04-24 17:23:34.404490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:120176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x186f00 00:19:35.329 [2024-04-24 17:23:34.404496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.329 [2024-04-24 17:23:34.404505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:120184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x186f00 00:19:35.329 
[2024-04-24 17:23:34.404511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.329 [2024-04-24 17:23:34.404519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:120192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x186f00 00:19:35.329 [2024-04-24 17:23:34.404525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.329 [2024-04-24 17:23:34.404533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:120200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x186f00 00:19:35.329 [2024-04-24 17:23:34.404540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.329 [2024-04-24 17:23:34.404548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:120208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x186f00 00:19:35.329 [2024-04-24 17:23:34.404555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.329 [2024-04-24 17:23:34.404564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:120216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x186f00 00:19:35.329 [2024-04-24 17:23:34.404571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.329 [2024-04-24 17:23:34.404579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:120224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x186f00 00:19:35.329 [2024-04-24 17:23:34.404585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.329 [2024-04-24 17:23:34.404593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:120232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x186f00 00:19:35.329 [2024-04-24 17:23:34.404599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.329 [2024-04-24 17:23:34.404607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x186f00 00:19:35.329 [2024-04-24 17:23:34.404614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.329 [2024-04-24 17:23:34.404623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:120248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x186f00 00:19:35.329 [2024-04-24 17:23:34.404629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.329 [2024-04-24 17:23:34.404637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:120256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x186f00 00:19:35.329 [2024-04-24 17:23:34.404643] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.329 [2024-04-24 17:23:34.404651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:120264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x186f00 00:19:35.329 [2024-04-24 17:23:34.404657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.329 [2024-04-24 17:23:34.404665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:120272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x186f00 00:19:35.329 [2024-04-24 17:23:34.404672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.329 [2024-04-24 17:23:34.404680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:120904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.329 [2024-04-24 17:23:34.404686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.329 [2024-04-24 17:23:34.404694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:120280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x186f00 00:19:35.329 [2024-04-24 17:23:34.404702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.329 [2024-04-24 17:23:34.406737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.329 [2024-04-24 17:23:34.406749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.329 [2024-04-24 17:23:34.406755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120288 len:8 PRP1 0x0 PRP2 0x0 00:19:35.329 [2024-04-24 17:23:34.406762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.329 [2024-04-24 17:23:34.406797] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4940 was disconnected and freed. reset controller. 00:19:35.329 [2024-04-24 17:23:34.406808] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:19:35.329 [2024-04-24 17:23:34.406814] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:35.329 [2024-04-24 17:23:34.409590] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:35.329 [2024-04-24 17:23:34.424198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:35.329 [2024-04-24 17:23:34.470240] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
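That completes the second failover cycle, and it has the same shape as the first: a burst of ABORTED - SQ DELETION completions for the queued WRITE and READ commands, qpair 0x2000192e4940 disconnected and freed, a failover of the path from 192.168.100.8:4421 to 192.168.100.8:4422, one CQ transport error -6 on qpair id 0 while the old connection is torn down, and finally "Resetting controller successful." When skimming a log this long, a rough tally of aborts per cycle is usually enough; the sketch below is a hypothetical helper, not part of the test (the console.log file name is assumed, and the split is only approximate when a failover line shares a physical line with abort entries):

import re
from collections import Counter

# Hypothetical helper, not part of the SPDK test: tally "ABORTED - SQ DELETION"
# completions between successive "Start failover" lines in a saved console log.
abort_re = re.compile(r"ABORTED - SQ DELETION")
failover_re = re.compile(r"Start failover from (\S+) to (\S+)")

counts = Counter()
window = "before first failover"
with open("console.log") as log:        # assumed file name
    for line in log:
        # Lines in this log can hold many entries, so count every match.
        counts[window] += len(abort_re.findall(line))
        m = failover_re.search(line)
        if m:
            window = f"after failover {m.group(1)} -> {m.group(2)}"

for window, n in counts.items():
    print(f"{window}: {n} aborted completions")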
00:19:35.330 [2024-04-24 17:23:38.772959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:95048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.330 [2024-04-24 17:23:38.773000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:95056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.330 [2024-04-24 17:23:38.773023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:95064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.330 [2024-04-24 17:23:38.773039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:95072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.330 [2024-04-24 17:23:38.773054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.330 [2024-04-24 17:23:38.773068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:95088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.330 [2024-04-24 17:23:38.773083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.330 [2024-04-24 17:23:38.773098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:95104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.330 [2024-04-24 17:23:38.773113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:94664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x186f00 00:19:35.330 [2024-04-24 17:23:38.773133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:94672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x186f00 00:19:35.330 [2024-04-24 17:23:38.773149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:94680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x186f00 00:19:35.330 [2024-04-24 17:23:38.773166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x186f00 00:19:35.330 [2024-04-24 17:23:38.773180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:95112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.330 [2024-04-24 17:23:38.773195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:95120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.330 [2024-04-24 17:23:38.773211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.330 [2024-04-24 17:23:38.773227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:95136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.330 [2024-04-24 17:23:38.773242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.330 [2024-04-24 17:23:38.773256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.330 [2024-04-24 17:23:38.773272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.330 [2024-04-24 17:23:38.773289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 
lba:95168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.330 [2024-04-24 17:23:38.773308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:95176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.330 [2024-04-24 17:23:38.773323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:95184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.330 [2024-04-24 17:23:38.773340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:95192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.330 [2024-04-24 17:23:38.773356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:95200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.330 [2024-04-24 17:23:38.773372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:95208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.330 [2024-04-24 17:23:38.773388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:95216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.330 [2024-04-24 17:23:38.773402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.330 [2024-04-24 17:23:38.773416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.330 [2024-04-24 17:23:38.773431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.330 [2024-04-24 17:23:38.773445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773452] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.330 [2024-04-24 17:23:38.773459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.330 [2024-04-24 17:23:38.773474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.330 [2024-04-24 17:23:38.773488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.330 [2024-04-24 17:23:38.773505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.330 [2024-04-24 17:23:38.773518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.330 [2024-04-24 17:23:38.773526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.331 [2024-04-24 17:23:38.773532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.331 [2024-04-24 17:23:38.773541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.331 [2024-04-24 17:23:38.773547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.331 [2024-04-24 17:23:38.773555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x186f00 00:19:35.331 [2024-04-24 17:23:38.773562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.331 [2024-04-24 17:23:38.773570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x186f00 00:19:35.331 [2024-04-24 17:23:38.773576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.331 [2024-04-24 17:23:38.773583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:94712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x186f00 00:19:35.331 [2024-04-24 17:23:38.773590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.331 [2024-04-24 17:23:38.773599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:94720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x186f00 00:19:35.331 [2024-04-24 17:23:38.773606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.331 [2024-04-24 17:23:38.773614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:95304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.331 [2024-04-24 17:23:38.773621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.331 [2024-04-24 17:23:38.773629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:95312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.331 [2024-04-24 17:23:38.773637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.331 [2024-04-24 17:23:38.773646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:95320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.331 [2024-04-24 17:23:38.773653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.331 [2024-04-24 17:23:38.773662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.331 [2024-04-24 17:23:38.773669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.331 [2024-04-24 17:23:38.773680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.331 [2024-04-24 17:23:38.773687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.331 [2024-04-24 17:23:38.773696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.331 [2024-04-24 17:23:38.773703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.331 [2024-04-24 17:23:38.773711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.331 [2024-04-24 17:23:38.773719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.331 [2024-04-24 17:23:38.773727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.331 [2024-04-24 17:23:38.773734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.331 [2024-04-24 17:23:38.773743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 
key:0x186f00 00:19:35.331 [2024-04-24 17:23:38.773751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.331 [2024-04-24 17:23:38.773759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:94736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x186f00 00:19:35.331 [2024-04-24 17:23:38.773767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.331 [2024-04-24 17:23:38.773775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:94744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x186f00 00:19:35.331 [2024-04-24 17:23:38.773783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.331 [2024-04-24 17:23:38.773791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:94752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x186f00 00:19:35.331 [2024-04-24 17:23:38.773799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.331 [2024-04-24 17:23:38.773807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:94760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x186f00 00:19:35.331 [2024-04-24 17:23:38.773814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.331 [2024-04-24 17:23:38.773823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:94768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x186f00 00:19:35.331 [2024-04-24 17:23:38.773833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.331 [2024-04-24 17:23:38.773841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:94776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x186f00 00:19:35.331 [2024-04-24 17:23:38.773848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.331 [2024-04-24 17:23:38.773856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:94784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x186f00 00:19:35.331 [2024-04-24 17:23:38.773864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.331 [2024-04-24 17:23:38.773873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:94792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x186f00 00:19:35.331 [2024-04-24 17:23:38.773881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.331 [2024-04-24 17:23:38.773889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x186f00 00:19:35.331 [2024-04-24 
17:23:38.773897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.331 [2024-04-24 17:23:38.773906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:94808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x186f00 00:19:35.331 [2024-04-24 17:23:38.773914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.331 [2024-04-24 17:23:38.773923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:94816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x186f00 00:19:35.331 [2024-04-24 17:23:38.773931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.331 [2024-04-24 17:23:38.773939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:94824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x186f00 00:19:35.331 [2024-04-24 17:23:38.773945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.331 [2024-04-24 17:23:38.773953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:94832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x186f00 00:19:35.331 [2024-04-24 17:23:38.773960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.331 [2024-04-24 17:23:38.773968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:94840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x186f00 00:19:35.331 [2024-04-24 17:23:38.773974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.773982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:94848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x186f00 00:19:35.332 [2024-04-24 17:23:38.773989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.773997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.332 [2024-04-24 17:23:38.774004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.774012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.332 [2024-04-24 17:23:38.774018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.774025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.332 [2024-04-24 17:23:38.774032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 
sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.774041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.332 [2024-04-24 17:23:38.774048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.774056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.332 [2024-04-24 17:23:38.774062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.774071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.332 [2024-04-24 17:23:38.774077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.774085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.332 [2024-04-24 17:23:38.774092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.774099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:95424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.332 [2024-04-24 17:23:38.774106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.774113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.332 [2024-04-24 17:23:38.774119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.774127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:95440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.332 [2024-04-24 17:23:38.774133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.774142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:95448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.332 [2024-04-24 17:23:38.774148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.774156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:95456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.332 [2024-04-24 17:23:38.774163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.774170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.332 [2024-04-24 17:23:38.774176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.774184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.332 [2024-04-24 17:23:38.774191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.774199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.332 [2024-04-24 17:23:38.774206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.774216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.332 [2024-04-24 17:23:38.774222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.774230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:95496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.332 [2024-04-24 17:23:38.774236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.774244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.332 [2024-04-24 17:23:38.774251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.774259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.332 [2024-04-24 17:23:38.774266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.774273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:95520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.332 [2024-04-24 17:23:38.774280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.774288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:95528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.332 [2024-04-24 17:23:38.774294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.774302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:95536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.332 [2024-04-24 17:23:38.774308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.774316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:95544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.332 [2024-04-24 17:23:38.774323] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.774331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.332 [2024-04-24 17:23:38.774337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.774345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:95560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.332 [2024-04-24 17:23:38.774351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.774359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:95568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.332 [2024-04-24 17:23:38.774365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.774373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.332 [2024-04-24 17:23:38.774379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.774387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:95584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.332 [2024-04-24 17:23:38.774397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.774405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:95592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.332 [2024-04-24 17:23:38.774411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.774420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:95600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.332 [2024-04-24 17:23:38.774427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.774435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.332 [2024-04-24 17:23:38.774442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.774450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.332 [2024-04-24 17:23:38.774456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.774464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:35.332 [2024-04-24 17:23:38.774470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.332 [2024-04-24 17:23:38.774479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.332 [2024-04-24 17:23:38.774486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.333 [2024-04-24 17:23:38.774493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.333 [2024-04-24 17:23:38.774500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.333 [2024-04-24 17:23:38.774508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:95648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.333 [2024-04-24 17:23:38.774514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.333 [2024-04-24 17:23:38.774521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:95656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.333 [2024-04-24 17:23:38.774528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.333 [2024-04-24 17:23:38.774536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:95664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.333 [2024-04-24 17:23:38.774543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.333 [2024-04-24 17:23:38.774551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:95672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.333 [2024-04-24 17:23:38.774557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.333 [2024-04-24 17:23:38.774565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:95680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.333 [2024-04-24 17:23:38.774573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.333 [2024-04-24 17:23:38.774581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:94856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x186f00 00:19:35.333 [2024-04-24 17:23:38.774588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.333 [2024-04-24 17:23:38.774596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x186f00 00:19:35.333 [2024-04-24 17:23:38.774602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.333 [2024-04-24 17:23:38.774611] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:94872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x186f00 00:19:35.333 [2024-04-24 17:23:38.774617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.333 [2024-04-24 17:23:38.774625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:94880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x186f00 00:19:35.333 [2024-04-24 17:23:38.774631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.333 [2024-04-24 17:23:38.774640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:94888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x186f00 00:19:35.333 [2024-04-24 17:23:38.774650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.333 [2024-04-24 17:23:38.774658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:94896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x186f00 00:19:35.333 [2024-04-24 17:23:38.774665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.333 [2024-04-24 17:23:38.774673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:94904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x186f00 00:19:35.333 [2024-04-24 17:23:38.774679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.333 [2024-04-24 17:23:38.774687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:94912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x186f00 00:19:35.333 [2024-04-24 17:23:38.774694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.333 [2024-04-24 17:23:38.774702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:94920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x186f00 00:19:35.333 [2024-04-24 17:23:38.774711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.333 [2024-04-24 17:23:38.774719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:94928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x186f00 00:19:35.333 [2024-04-24 17:23:38.774725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.333 [2024-04-24 17:23:38.774734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x186f00 00:19:35.333 [2024-04-24 17:23:38.774740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.333 [2024-04-24 17:23:38.774750] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:94 nsid:1 lba:94944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x186f00 00:19:35.333 [2024-04-24 17:23:38.774757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.333 [2024-04-24 17:23:38.774765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:94952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x186f00 00:19:35.333 [2024-04-24 17:23:38.774771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.333 [2024-04-24 17:23:38.774779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:94960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x186f00 00:19:35.333 [2024-04-24 17:23:38.774786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.333 [2024-04-24 17:23:38.774794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:94968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x186f00 00:19:35.333 [2024-04-24 17:23:38.774801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.333 [2024-04-24 17:23:38.774809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:94976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x186f00 00:19:35.333 [2024-04-24 17:23:38.774815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.333 [2024-04-24 17:23:38.774823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x186f00 00:19:35.333 [2024-04-24 17:23:38.774832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.333 [2024-04-24 17:23:38.774840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:94992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x186f00 00:19:35.333 [2024-04-24 17:23:38.774847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.333 [2024-04-24 17:23:38.774855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:95000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x186f00 00:19:35.333 [2024-04-24 17:23:38.774861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.333 [2024-04-24 17:23:38.774870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x186f00 00:19:35.333 [2024-04-24 17:23:38.774876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.333 [2024-04-24 17:23:38.774885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 
lba:95016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x186f00 00:19:35.333 [2024-04-24 17:23:38.774893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.333 [2024-04-24 17:23:38.774900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:95024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x186f00 00:19:35.333 [2024-04-24 17:23:38.774907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.333 [2024-04-24 17:23:38.774916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:95032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x186f00 00:19:35.333 [2024-04-24 17:23:38.774923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:d600 p:0 m:0 dnr:0 00:19:35.333 [2024-04-24 17:23:38.776879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.333 [2024-04-24 17:23:38.776891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.333 [2024-04-24 17:23:38.776898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95040 len:8 PRP1 0x0 PRP2 0x0 00:19:35.333 [2024-04-24 17:23:38.776904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.333 [2024-04-24 17:23:38.776940] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4940 was disconnected and freed. reset controller. 00:19:35.333 [2024-04-24 17:23:38.776948] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:19:35.333 [2024-04-24 17:23:38.776955] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:35.333 [2024-04-24 17:23:38.779730] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:35.333 [2024-04-24 17:23:38.794093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:35.333 [2024-04-24 17:23:38.842571] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
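The three failovers traced above cycle the controller through 192.168.100.8:4420 -> 4421 -> 4422 -> 4420. The alternate paths exist because the test attaches the same controller name to each listener, as the rpc.py calls further down in this log show. A condensed sketch of that registration (socket path and NQN copied from the log; attaching again with the same -b name and NQN is what registers the extra transport ID as a failover path):

    RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    NQN=nqn.2016-06.io.spdk:cnode1
    # primary path
    $RPC bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n $NQN
    # alternate paths exercised by the failovers above
    $RPC bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n $NQN
    $RPC bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n $NQN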
00:19:35.333 00:19:35.333 Latency(us) 00:19:35.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.333 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:35.333 Verification LBA range: start 0x0 length 0x4000 00:19:35.333 NVMe0n1 : 15.01 14442.08 56.41 341.01 0.00 8636.16 353.04 1018616.69 00:19:35.333 =================================================================================================================== 00:19:35.333 Total : 14442.08 56.41 341.01 0.00 8636.16 353.04 1018616.69 00:19:35.333 Received shutdown signal, test time was about 15.000000 seconds 00:19:35.333 00:19:35.334 Latency(us) 00:19:35.334 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.334 =================================================================================================================== 00:19:35.334 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:35.334 17:23:44 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:19:35.334 17:23:44 -- host/failover.sh@65 -- # count=3 00:19:35.334 17:23:44 -- host/failover.sh@67 -- # (( count != 3 )) 00:19:35.334 17:23:44 -- host/failover.sh@73 -- # bdevperf_pid=3045299 00:19:35.334 17:23:44 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:19:35.334 17:23:44 -- host/failover.sh@75 -- # waitforlisten 3045299 /var/tmp/bdevperf.sock 00:19:35.334 17:23:44 -- common/autotest_common.sh@817 -- # '[' -z 3045299 ']' 00:19:35.334 17:23:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:35.334 17:23:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:35.334 17:23:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:35.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:35.334 17:23:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:35.334 17:23:44 -- common/autotest_common.sh@10 -- # set +x 00:19:35.898 17:23:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:35.898 17:23:45 -- common/autotest_common.sh@850 -- # return 0 00:19:35.898 17:23:45 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:19:36.156 [2024-04-24 17:23:45.201554] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:19:36.156 17:23:45 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:19:36.156 [2024-04-24 17:23:45.374112] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:19:36.156 17:23:45 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:36.413 NVMe0n1 00:19:36.671 17:23:45 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:36.671 00:19:36.671 17:23:45 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:36.929 00:19:36.929 17:23:46 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:36.929 17:23:46 -- host/failover.sh@82 -- # grep -q NVMe0 00:19:37.186 17:23:46 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:37.443 17:23:46 -- host/failover.sh@87 -- # sleep 3 00:19:40.726 17:23:49 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:40.726 17:23:49 -- host/failover.sh@88 -- # grep -q NVMe0 00:19:40.726 17:23:49 -- host/failover.sh@90 -- # run_test_pid=3045379 00:19:40.726 17:23:49 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:40.726 17:23:49 -- host/failover.sh@92 -- # wait 3045379 00:19:41.661 0 00:19:41.661 17:23:50 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:41.661 [2024-04-24 17:23:44.260914] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:19:41.661 [2024-04-24 17:23:44.260961] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3045299 ] 00:19:41.661 EAL: No free 2048 kB hugepages reported on node 1 00:19:41.661 [2024-04-24 17:23:44.314741] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.661 [2024-04-24 17:23:44.381011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.661 [2024-04-24 17:23:46.434678] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:19:41.661 [2024-04-24 17:23:46.435278] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:41.661 [2024-04-24 17:23:46.435315] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:41.661 [2024-04-24 17:23:46.452636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:41.661 [2024-04-24 17:23:46.468485] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:41.661 Running I/O for 1 seconds... 00:19:41.661 00:19:41.661 Latency(us) 00:19:41.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:41.661 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:41.661 Verification LBA range: start 0x0 length 0x4000 00:19:41.661 NVMe0n1 : 1.01 18164.22 70.95 0.00 0.00 7007.57 2512.21 15291.73 00:19:41.661 =================================================================================================================== 00:19:41.661 Total : 18164.22 70.95 0.00 0.00 7007.57 2512.21 15291.73 00:19:41.661 17:23:50 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:41.661 17:23:50 -- host/failover.sh@95 -- # grep -q NVMe0 00:19:41.919 17:23:50 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:41.919 17:23:51 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:41.919 17:23:51 -- host/failover.sh@99 -- # grep -q NVMe0 00:19:42.177 17:23:51 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:42.436 17:23:51 -- host/failover.sh@101 -- # sleep 3 00:19:45.789 17:23:54 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:45.789 17:23:54 -- host/failover.sh@103 -- # grep -q NVMe0 00:19:45.789 17:23:54 -- host/failover.sh@108 -- # killprocess 3045299 00:19:45.789 17:23:54 -- common/autotest_common.sh@936 -- # '[' -z 3045299 ']' 00:19:45.789 17:23:54 -- common/autotest_common.sh@940 -- # kill -0 3045299 00:19:45.789 17:23:54 -- common/autotest_common.sh@941 -- # uname 00:19:45.789 17:23:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:45.789 17:23:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 3045299 00:19:45.789 17:23:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:45.789 17:23:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:45.789 17:23:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3045299' 00:19:45.789 killing process with pid 3045299 00:19:45.789 17:23:54 -- common/autotest_common.sh@955 -- # kill 3045299 00:19:45.789 17:23:54 -- common/autotest_common.sh@960 -- # wait 3045299 00:19:45.789 17:23:54 -- host/failover.sh@110 -- # sync 00:19:45.789 17:23:54 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:46.048 17:23:55 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:19:46.048 17:23:55 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:46.048 17:23:55 -- host/failover.sh@116 -- # nvmftestfini 00:19:46.048 17:23:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:46.048 17:23:55 -- nvmf/common.sh@117 -- # sync 00:19:46.048 17:23:55 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:46.048 17:23:55 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:46.048 17:23:55 -- nvmf/common.sh@120 -- # set +e 00:19:46.048 17:23:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:46.048 17:23:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:46.048 rmmod nvme_rdma 00:19:46.048 rmmod nvme_fabrics 00:19:46.048 17:23:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:46.048 17:23:55 -- nvmf/common.sh@124 -- # set -e 00:19:46.048 17:23:55 -- nvmf/common.sh@125 -- # return 0 00:19:46.048 17:23:55 -- nvmf/common.sh@478 -- # '[' -n 3045012 ']' 00:19:46.048 17:23:55 -- nvmf/common.sh@479 -- # killprocess 3045012 00:19:46.048 17:23:55 -- common/autotest_common.sh@936 -- # '[' -z 3045012 ']' 00:19:46.048 17:23:55 -- common/autotest_common.sh@940 -- # kill -0 3045012 00:19:46.048 17:23:55 -- common/autotest_common.sh@941 -- # uname 00:19:46.048 17:23:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:46.048 17:23:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3045012 00:19:46.048 17:23:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:46.048 17:23:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:46.048 17:23:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3045012' 00:19:46.048 killing process with pid 3045012 00:19:46.048 17:23:55 -- common/autotest_common.sh@955 -- # kill 3045012 00:19:46.048 17:23:55 -- common/autotest_common.sh@960 -- # wait 3045012 00:19:46.306 17:23:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:46.306 17:23:55 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:19:46.306 00:19:46.306 real 0m34.896s 00:19:46.306 user 2m2.482s 00:19:46.306 sys 0m5.263s 00:19:46.306 17:23:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:46.306 17:23:55 -- common/autotest_common.sh@10 -- # set +x 00:19:46.306 ************************************ 00:19:46.306 END TEST nvmf_failover 00:19:46.306 ************************************ 00:19:46.306 17:23:55 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:19:46.306 17:23:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:46.306 17:23:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:46.306 17:23:55 -- common/autotest_common.sh@10 -- # set +x 
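The teardown above calls the killprocess helper from autotest_common.sh twice (pid 3045299 for bdevperf, pid 3045012 for the nvmf target) before unloading the nvme-rdma modules. A minimal sketch of the kill-and-wait pattern visible in the trace (illustrative only, not the actual autotest_common.sh implementation):

    killproc() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0          # nothing to do if it is already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1                  # refuse to kill a bare sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                 # reap it if it is a child of this shell
    }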
00:19:46.565 ************************************ 00:19:46.565 START TEST nvmf_discovery 00:19:46.565 ************************************ 00:19:46.565 17:23:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:19:46.565 * Looking for test storage... 00:19:46.565 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:46.565 17:23:55 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:46.565 17:23:55 -- nvmf/common.sh@7 -- # uname -s 00:19:46.565 17:23:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:46.565 17:23:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:46.565 17:23:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:46.565 17:23:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:46.565 17:23:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:46.565 17:23:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:46.565 17:23:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:46.565 17:23:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:46.565 17:23:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:46.565 17:23:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:46.565 17:23:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:46.565 17:23:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:19:46.565 17:23:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:46.565 17:23:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:46.565 17:23:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:46.565 17:23:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:46.565 17:23:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:46.565 17:23:55 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:46.565 17:23:55 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:46.565 17:23:55 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:46.565 17:23:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.565 17:23:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.565 17:23:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.565 17:23:55 -- paths/export.sh@5 -- # export PATH 00:19:46.565 17:23:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.565 17:23:55 -- nvmf/common.sh@47 -- # : 0 00:19:46.565 17:23:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:46.565 17:23:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:46.565 17:23:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:46.565 17:23:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:46.565 17:23:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:46.565 17:23:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:46.565 17:23:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:46.565 17:23:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:46.565 17:23:55 -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:19:46.565 17:23:55 -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:19:46.565 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:19:46.565 17:23:55 -- host/discovery.sh@13 -- # exit 0 00:19:46.565 00:19:46.565 real 0m0.091s 00:19:46.565 user 0m0.038s 00:19:46.565 sys 0m0.060s 00:19:46.565 17:23:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:46.565 17:23:55 -- common/autotest_common.sh@10 -- # set +x 00:19:46.565 ************************************ 00:19:46.565 END TEST nvmf_discovery 00:19:46.565 ************************************ 00:19:46.565 17:23:55 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:19:46.565 17:23:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:46.565 17:23:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:46.565 17:23:55 -- common/autotest_common.sh@10 -- # set +x 00:19:46.824 ************************************ 00:19:46.824 START TEST nvmf_discovery_remove_ifc 00:19:46.824 ************************************ 00:19:46.824 17:23:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:19:46.824 * Looking for test storage... 
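nvmf_discovery finishes in well under a second because host/discovery.sh bails out as soon as it sees an RDMA transport; the same guard short-circuits discovery_remove_ifc.sh just below. The check traced at discovery.sh@11-13 is equivalent to the following sketch; the variable name is an assumption, since xtrace only shows the already-expanded value rdma:

  # early-exit guard for RDMA runs (variable name assumed)
  if [ "$TEST_TRANSPORT" == rdma ]; then
      echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
      exit 0
  fi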
00:19:46.825 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:46.825 17:23:55 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:46.825 17:23:55 -- nvmf/common.sh@7 -- # uname -s 00:19:46.825 17:23:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:46.825 17:23:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:46.825 17:23:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:46.825 17:23:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:46.825 17:23:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:46.825 17:23:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:46.825 17:23:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:46.825 17:23:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:46.825 17:23:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:46.825 17:23:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:46.825 17:23:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:46.825 17:23:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:19:46.825 17:23:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:46.825 17:23:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:46.825 17:23:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:46.825 17:23:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:46.825 17:23:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:46.825 17:23:55 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:46.825 17:23:55 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:46.825 17:23:55 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:46.825 17:23:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.825 17:23:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.825 17:23:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.825 17:23:55 -- paths/export.sh@5 -- # export PATH 00:19:46.825 17:23:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.825 17:23:55 -- nvmf/common.sh@47 -- # : 0 00:19:46.825 17:23:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:46.825 17:23:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:46.825 17:23:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:46.825 17:23:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:46.825 17:23:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:46.825 17:23:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:46.825 17:23:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:46.825 17:23:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:46.825 17:23:55 -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:19:46.825 17:23:55 -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:19:46.825 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:19:46.825 17:23:55 -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:19:46.825 00:19:46.825 real 0m0.087s 00:19:46.825 user 0m0.027s 00:19:46.825 sys 0m0.065s 00:19:46.825 17:23:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:46.825 17:23:55 -- common/autotest_common.sh@10 -- # set +x 00:19:46.825 ************************************ 00:19:46.825 END TEST nvmf_discovery_remove_ifc 00:19:46.825 ************************************ 00:19:46.825 17:23:55 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:19:46.825 17:23:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:46.825 17:23:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:46.825 17:23:55 -- common/autotest_common.sh@10 -- # set +x 00:19:46.825 ************************************ 00:19:46.825 START TEST nvmf_identify_kernel_target 00:19:46.825 ************************************ 00:19:46.825 17:23:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:19:47.083 * Looking for test storage... 
00:19:47.083 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:47.083 17:23:56 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:47.083 17:23:56 -- nvmf/common.sh@7 -- # uname -s 00:19:47.083 17:23:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.083 17:23:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.083 17:23:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.083 17:23:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.083 17:23:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.083 17:23:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.083 17:23:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.083 17:23:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.083 17:23:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.084 17:23:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.084 17:23:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:47.084 17:23:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:19:47.084 17:23:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.084 17:23:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.084 17:23:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:47.084 17:23:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:47.084 17:23:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:47.084 17:23:56 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.084 17:23:56 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.084 17:23:56 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.084 17:23:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.084 17:23:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.084 17:23:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.084 17:23:56 -- paths/export.sh@5 -- # export PATH 00:19:47.084 17:23:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.084 17:23:56 -- nvmf/common.sh@47 -- # : 0 00:19:47.084 17:23:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:47.084 17:23:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:47.084 17:23:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:47.084 17:23:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.084 17:23:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.084 17:23:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:47.084 17:23:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:47.084 17:23:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:47.084 17:23:56 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:19:47.084 17:23:56 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:19:47.084 17:23:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.084 17:23:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:47.084 17:23:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:47.084 17:23:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:47.084 17:23:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.084 17:23:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.084 17:23:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.084 17:23:56 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:47.084 17:23:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:47.084 17:23:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:47.084 17:23:56 -- common/autotest_common.sh@10 -- # set +x 00:19:52.347 17:24:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:52.347 17:24:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:52.347 17:24:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:52.347 17:24:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:52.347 17:24:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:52.347 17:24:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:52.347 17:24:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:52.347 17:24:01 -- nvmf/common.sh@295 -- # net_devs=() 00:19:52.347 17:24:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:52.347 17:24:01 -- nvmf/common.sh@296 -- # e810=() 00:19:52.347 17:24:01 -- nvmf/common.sh@296 -- # local -ga e810 00:19:52.347 17:24:01 -- nvmf/common.sh@297 -- # 
x722=() 00:19:52.347 17:24:01 -- nvmf/common.sh@297 -- # local -ga x722 00:19:52.347 17:24:01 -- nvmf/common.sh@298 -- # mlx=() 00:19:52.347 17:24:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:52.347 17:24:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:52.347 17:24:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:52.347 17:24:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:52.347 17:24:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:52.347 17:24:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:52.347 17:24:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:52.347 17:24:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:52.347 17:24:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:52.348 17:24:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:52.348 17:24:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:52.348 17:24:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:52.348 17:24:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:52.348 17:24:01 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:52.348 17:24:01 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:52.348 17:24:01 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:52.348 17:24:01 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:52.348 17:24:01 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:52.348 17:24:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:52.348 17:24:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.348 17:24:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:19:52.348 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:19:52.348 17:24:01 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:52.348 17:24:01 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:52.348 17:24:01 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:52.348 17:24:01 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:52.348 17:24:01 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:52.348 17:24:01 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:52.348 17:24:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.348 17:24:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:19:52.348 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:19:52.348 17:24:01 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:52.348 17:24:01 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:52.348 17:24:01 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:52.348 17:24:01 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:52.348 17:24:01 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:52.348 17:24:01 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:52.348 17:24:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:52.348 17:24:01 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:52.348 17:24:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.348 17:24:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.348 17:24:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:52.348 17:24:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.348 17:24:01 -- 
nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:19:52.348 Found net devices under 0000:da:00.0: mlx_0_0 00:19:52.348 17:24:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.348 17:24:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.348 17:24:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.348 17:24:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:52.348 17:24:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.348 17:24:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:19:52.348 Found net devices under 0000:da:00.1: mlx_0_1 00:19:52.348 17:24:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.348 17:24:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:52.348 17:24:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:52.348 17:24:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:52.348 17:24:01 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:19:52.348 17:24:01 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:19:52.348 17:24:01 -- nvmf/common.sh@409 -- # rdma_device_init 00:19:52.348 17:24:01 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:19:52.348 17:24:01 -- nvmf/common.sh@58 -- # uname 00:19:52.348 17:24:01 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:52.348 17:24:01 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:52.348 17:24:01 -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:52.348 17:24:01 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:52.348 17:24:01 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:52.348 17:24:01 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:52.348 17:24:01 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:52.348 17:24:01 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:52.348 17:24:01 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:19:52.348 17:24:01 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:52.348 17:24:01 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:52.348 17:24:01 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:52.348 17:24:01 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:52.348 17:24:01 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:52.348 17:24:01 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:52.348 17:24:01 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:52.348 17:24:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:52.348 17:24:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.348 17:24:01 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:52.348 17:24:01 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:52.348 17:24:01 -- nvmf/common.sh@105 -- # continue 2 00:19:52.348 17:24:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:52.348 17:24:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.348 17:24:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:52.348 17:24:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.348 17:24:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:52.348 17:24:01 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:52.348 17:24:01 -- nvmf/common.sh@105 -- # continue 2 00:19:52.348 17:24:01 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:52.348 17:24:01 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:52.348 17:24:01 -- 
nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:52.348 17:24:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:52.348 17:24:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:52.348 17:24:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:52.348 17:24:01 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:52.348 17:24:01 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:52.348 17:24:01 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:52.348 434: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:52.348 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:19:52.348 altname enp218s0f0np0 00:19:52.348 altname ens818f0np0 00:19:52.348 inet 192.168.100.8/24 scope global mlx_0_0 00:19:52.348 valid_lft forever preferred_lft forever 00:19:52.348 17:24:01 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:52.348 17:24:01 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:52.348 17:24:01 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:52.348 17:24:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:52.348 17:24:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:52.348 17:24:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:52.348 17:24:01 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:52.348 17:24:01 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:52.348 17:24:01 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:52.348 435: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:52.348 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:19:52.348 altname enp218s0f1np1 00:19:52.348 altname ens818f1np1 00:19:52.348 inet 192.168.100.9/24 scope global mlx_0_1 00:19:52.348 valid_lft forever preferred_lft forever 00:19:52.348 17:24:01 -- nvmf/common.sh@411 -- # return 0 00:19:52.348 17:24:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:52.348 17:24:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:52.348 17:24:01 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:19:52.348 17:24:01 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:19:52.348 17:24:01 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:52.348 17:24:01 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:52.348 17:24:01 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:52.348 17:24:01 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:52.348 17:24:01 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:52.348 17:24:01 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:52.348 17:24:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:52.348 17:24:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.348 17:24:01 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:52.348 17:24:01 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:52.348 17:24:01 -- nvmf/common.sh@105 -- # continue 2 00:19:52.348 17:24:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:52.348 17:24:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.348 17:24:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:52.348 17:24:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.348 17:24:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:52.348 17:24:01 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:52.348 17:24:01 -- nvmf/common.sh@105 -- # continue 2 00:19:52.348 17:24:01 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 
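The walk above (get_rdma_if_list / allocate_nic_ips) is how the harness decides that mlx_0_0 and mlx_0_1 are the RDMA-capable netdevs and that they hold 192.168.100.8 and 192.168.100.9. The per-interface address lookup traced at nvmf/common.sh@112-113 is a plain ip(8) pipeline; a standalone equivalent, reusing the interface name from the log:

  # resolve the primary IPv4 address of an RDMA netdev, as get_ip_address does
  interface=mlx_0_0
  ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1    # prints 192.168.100.8 on this node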
00:19:52.348 17:24:01 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:52.348 17:24:01 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:52.348 17:24:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:52.348 17:24:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:52.348 17:24:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:52.348 17:24:01 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:52.348 17:24:01 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:52.348 17:24:01 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:52.348 17:24:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:52.348 17:24:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:52.348 17:24:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:52.348 17:24:01 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:19:52.348 192.168.100.9' 00:19:52.349 17:24:01 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:52.349 192.168.100.9' 00:19:52.349 17:24:01 -- nvmf/common.sh@446 -- # head -n 1 00:19:52.349 17:24:01 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:52.349 17:24:01 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:19:52.349 192.168.100.9' 00:19:52.349 17:24:01 -- nvmf/common.sh@447 -- # tail -n +2 00:19:52.349 17:24:01 -- nvmf/common.sh@447 -- # head -n 1 00:19:52.349 17:24:01 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:52.349 17:24:01 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:19:52.349 17:24:01 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:52.349 17:24:01 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:19:52.349 17:24:01 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:19:52.349 17:24:01 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:19:52.349 17:24:01 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:19:52.349 17:24:01 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:19:52.349 17:24:01 -- nvmf/common.sh@717 -- # local ip 00:19:52.349 17:24:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:52.349 17:24:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:52.349 17:24:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.349 17:24:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.349 17:24:01 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:52.349 17:24:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:52.349 17:24:01 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:52.349 17:24:01 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:52.349 17:24:01 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:52.349 17:24:01 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:19:52.349 17:24:01 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:19:52.349 17:24:01 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:19:52.349 17:24:01 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:19:52.349 17:24:01 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:52.349 17:24:01 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:52.349 17:24:01 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:52.349 17:24:01 -- nvmf/common.sh@628 -- # 
local block nvme 00:19:52.349 17:24:01 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:19:52.349 17:24:01 -- nvmf/common.sh@631 -- # modprobe nvmet 00:19:52.349 17:24:01 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:52.349 17:24:01 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:19:54.875 Waiting for block devices as requested 00:19:54.875 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:19:54.875 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:19:54.875 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:19:54.875 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:19:54.875 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:19:54.875 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:19:54.875 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:19:55.132 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:19:55.132 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:19:55.132 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:19:55.132 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:19:55.390 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:19:55.390 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:19:55.390 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:19:55.648 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:19:55.648 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:19:55.648 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:19:55.648 17:24:04 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:19:55.648 17:24:04 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:55.648 17:24:04 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:19:55.648 17:24:04 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:19:55.648 17:24:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:55.648 17:24:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:19:55.648 17:24:04 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:19:55.648 17:24:04 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:55.648 17:24:04 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:19:55.906 No valid GPT data, bailing 00:19:55.906 17:24:04 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:55.906 17:24:04 -- scripts/common.sh@391 -- # pt= 00:19:55.906 17:24:04 -- scripts/common.sh@392 -- # return 1 00:19:55.906 17:24:04 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:19:55.906 17:24:04 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:19:55.906 17:24:04 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:55.906 17:24:04 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:55.906 17:24:04 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:55.906 17:24:04 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:55.906 17:24:04 -- nvmf/common.sh@656 -- # echo 1 00:19:55.906 17:24:04 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:19:55.906 17:24:04 -- nvmf/common.sh@658 -- # echo 1 00:19:55.906 17:24:04 -- nvmf/common.sh@660 -- # echo 192.168.100.8 00:19:55.906 17:24:04 -- nvmf/common.sh@661 -- # echo rdma 00:19:55.906 17:24:04 -- nvmf/common.sh@662 -- # echo 4420 00:19:55.906 17:24:04 -- nvmf/common.sh@663 -- # echo ipv4 00:19:55.906 17:24:04 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
/sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:55.906 17:24:04 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:19:55.906 00:19:55.906 Discovery Log Number of Records 2, Generation counter 2 00:19:55.906 =====Discovery Log Entry 0====== 00:19:55.906 trtype: rdma 00:19:55.906 adrfam: ipv4 00:19:55.906 subtype: current discovery subsystem 00:19:55.906 treq: not specified, sq flow control disable supported 00:19:55.906 portid: 1 00:19:55.906 trsvcid: 4420 00:19:55.906 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:55.906 traddr: 192.168.100.8 00:19:55.906 eflags: none 00:19:55.906 rdma_prtype: not specified 00:19:55.906 rdma_qptype: connected 00:19:55.906 rdma_cms: rdma-cm 00:19:55.906 rdma_pkey: 0x0000 00:19:55.906 =====Discovery Log Entry 1====== 00:19:55.906 trtype: rdma 00:19:55.906 adrfam: ipv4 00:19:55.906 subtype: nvme subsystem 00:19:55.906 treq: not specified, sq flow control disable supported 00:19:55.906 portid: 1 00:19:55.906 trsvcid: 4420 00:19:55.906 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:55.906 traddr: 192.168.100.8 00:19:55.906 eflags: none 00:19:55.906 rdma_prtype: not specified 00:19:55.906 rdma_qptype: connected 00:19:55.906 rdma_cms: rdma-cm 00:19:55.906 rdma_pkey: 0x0000 00:19:55.906 17:24:05 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:19:55.906 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:19:56.166 EAL: No free 2048 kB hugepages reported on node 1 00:19:56.166 ===================================================== 00:19:56.166 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:56.166 ===================================================== 00:19:56.166 Controller Capabilities/Features 00:19:56.166 ================================ 00:19:56.166 Vendor ID: 0000 00:19:56.166 Subsystem Vendor ID: 0000 00:19:56.166 Serial Number: 0f13dbb7b36da424f834 00:19:56.166 Model Number: Linux 00:19:56.166 Firmware Version: 6.7.0-68 00:19:56.166 Recommended Arb Burst: 0 00:19:56.166 IEEE OUI Identifier: 00 00 00 00:19:56.166 Multi-path I/O 00:19:56.166 May have multiple subsystem ports: No 00:19:56.166 May have multiple controllers: No 00:19:56.166 Associated with SR-IOV VF: No 00:19:56.166 Max Data Transfer Size: Unlimited 00:19:56.166 Max Number of Namespaces: 0 00:19:56.166 Max Number of I/O Queues: 1024 00:19:56.166 NVMe Specification Version (VS): 1.3 00:19:56.166 NVMe Specification Version (Identify): 1.3 00:19:56.166 Maximum Queue Entries: 128 00:19:56.166 Contiguous Queues Required: No 00:19:56.166 Arbitration Mechanisms Supported 00:19:56.166 Weighted Round Robin: Not Supported 00:19:56.166 Vendor Specific: Not Supported 00:19:56.166 Reset Timeout: 7500 ms 00:19:56.166 Doorbell Stride: 4 bytes 00:19:56.166 NVM Subsystem Reset: Not Supported 00:19:56.166 Command Sets Supported 00:19:56.166 NVM Command Set: Supported 00:19:56.166 Boot Partition: Not Supported 00:19:56.166 Memory Page Size Minimum: 4096 bytes 00:19:56.166 Memory Page Size Maximum: 4096 bytes 00:19:56.166 Persistent Memory Region: Not Supported 00:19:56.166 Optional Asynchronous Events Supported 00:19:56.166 Namespace Attribute Notices: Not Supported 00:19:56.166 Firmware Activation Notices: Not Supported 00:19:56.166 ANA Change Notices: Not Supported 00:19:56.166 PLE Aggregate 
Log Change Notices: Not Supported 00:19:56.166 LBA Status Info Alert Notices: Not Supported 00:19:56.166 EGE Aggregate Log Change Notices: Not Supported 00:19:56.166 Normal NVM Subsystem Shutdown event: Not Supported 00:19:56.166 Zone Descriptor Change Notices: Not Supported 00:19:56.166 Discovery Log Change Notices: Supported 00:19:56.166 Controller Attributes 00:19:56.166 128-bit Host Identifier: Not Supported 00:19:56.166 Non-Operational Permissive Mode: Not Supported 00:19:56.166 NVM Sets: Not Supported 00:19:56.166 Read Recovery Levels: Not Supported 00:19:56.166 Endurance Groups: Not Supported 00:19:56.166 Predictable Latency Mode: Not Supported 00:19:56.166 Traffic Based Keep ALive: Not Supported 00:19:56.166 Namespace Granularity: Not Supported 00:19:56.166 SQ Associations: Not Supported 00:19:56.166 UUID List: Not Supported 00:19:56.166 Multi-Domain Subsystem: Not Supported 00:19:56.166 Fixed Capacity Management: Not Supported 00:19:56.166 Variable Capacity Management: Not Supported 00:19:56.166 Delete Endurance Group: Not Supported 00:19:56.166 Delete NVM Set: Not Supported 00:19:56.166 Extended LBA Formats Supported: Not Supported 00:19:56.166 Flexible Data Placement Supported: Not Supported 00:19:56.166 00:19:56.166 Controller Memory Buffer Support 00:19:56.166 ================================ 00:19:56.166 Supported: No 00:19:56.166 00:19:56.166 Persistent Memory Region Support 00:19:56.166 ================================ 00:19:56.166 Supported: No 00:19:56.166 00:19:56.166 Admin Command Set Attributes 00:19:56.166 ============================ 00:19:56.166 Security Send/Receive: Not Supported 00:19:56.166 Format NVM: Not Supported 00:19:56.166 Firmware Activate/Download: Not Supported 00:19:56.166 Namespace Management: Not Supported 00:19:56.166 Device Self-Test: Not Supported 00:19:56.166 Directives: Not Supported 00:19:56.166 NVMe-MI: Not Supported 00:19:56.166 Virtualization Management: Not Supported 00:19:56.166 Doorbell Buffer Config: Not Supported 00:19:56.166 Get LBA Status Capability: Not Supported 00:19:56.166 Command & Feature Lockdown Capability: Not Supported 00:19:56.166 Abort Command Limit: 1 00:19:56.166 Async Event Request Limit: 1 00:19:56.166 Number of Firmware Slots: N/A 00:19:56.166 Firmware Slot 1 Read-Only: N/A 00:19:56.166 Firmware Activation Without Reset: N/A 00:19:56.166 Multiple Update Detection Support: N/A 00:19:56.166 Firmware Update Granularity: No Information Provided 00:19:56.166 Per-Namespace SMART Log: No 00:19:56.166 Asymmetric Namespace Access Log Page: Not Supported 00:19:56.166 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:56.166 Command Effects Log Page: Not Supported 00:19:56.166 Get Log Page Extended Data: Supported 00:19:56.166 Telemetry Log Pages: Not Supported 00:19:56.166 Persistent Event Log Pages: Not Supported 00:19:56.166 Supported Log Pages Log Page: May Support 00:19:56.166 Commands Supported & Effects Log Page: Not Supported 00:19:56.166 Feature Identifiers & Effects Log Page:May Support 00:19:56.166 NVMe-MI Commands & Effects Log Page: May Support 00:19:56.166 Data Area 4 for Telemetry Log: Not Supported 00:19:56.166 Error Log Page Entries Supported: 1 00:19:56.166 Keep Alive: Not Supported 00:19:56.166 00:19:56.166 NVM Command Set Attributes 00:19:56.166 ========================== 00:19:56.166 Submission Queue Entry Size 00:19:56.166 Max: 1 00:19:56.166 Min: 1 00:19:56.166 Completion Queue Entry Size 00:19:56.166 Max: 1 00:19:56.166 Min: 1 00:19:56.166 Number of Namespaces: 0 00:19:56.166 Compare Command: Not 
Supported 00:19:56.166 Write Uncorrectable Command: Not Supported 00:19:56.166 Dataset Management Command: Not Supported 00:19:56.166 Write Zeroes Command: Not Supported 00:19:56.166 Set Features Save Field: Not Supported 00:19:56.166 Reservations: Not Supported 00:19:56.166 Timestamp: Not Supported 00:19:56.166 Copy: Not Supported 00:19:56.166 Volatile Write Cache: Not Present 00:19:56.166 Atomic Write Unit (Normal): 1 00:19:56.166 Atomic Write Unit (PFail): 1 00:19:56.166 Atomic Compare & Write Unit: 1 00:19:56.166 Fused Compare & Write: Not Supported 00:19:56.166 Scatter-Gather List 00:19:56.166 SGL Command Set: Supported 00:19:56.166 SGL Keyed: Supported 00:19:56.166 SGL Bit Bucket Descriptor: Not Supported 00:19:56.166 SGL Metadata Pointer: Not Supported 00:19:56.166 Oversized SGL: Not Supported 00:19:56.166 SGL Metadata Address: Not Supported 00:19:56.166 SGL Offset: Supported 00:19:56.166 Transport SGL Data Block: Not Supported 00:19:56.166 Replay Protected Memory Block: Not Supported 00:19:56.166 00:19:56.166 Firmware Slot Information 00:19:56.166 ========================= 00:19:56.166 Active slot: 0 00:19:56.166 00:19:56.166 00:19:56.166 Error Log 00:19:56.166 ========= 00:19:56.166 00:19:56.166 Active Namespaces 00:19:56.166 ================= 00:19:56.166 Discovery Log Page 00:19:56.166 ================== 00:19:56.166 Generation Counter: 2 00:19:56.166 Number of Records: 2 00:19:56.166 Record Format: 0 00:19:56.166 00:19:56.166 Discovery Log Entry 0 00:19:56.166 ---------------------- 00:19:56.166 Transport Type: 1 (RDMA) 00:19:56.166 Address Family: 1 (IPv4) 00:19:56.166 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:56.166 Entry Flags: 00:19:56.166 Duplicate Returned Information: 0 00:19:56.166 Explicit Persistent Connection Support for Discovery: 0 00:19:56.166 Transport Requirements: 00:19:56.166 Secure Channel: Not Specified 00:19:56.166 Port ID: 1 (0x0001) 00:19:56.166 Controller ID: 65535 (0xffff) 00:19:56.166 Admin Max SQ Size: 32 00:19:56.166 Transport Service Identifier: 4420 00:19:56.166 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:56.166 Transport Address: 192.168.100.8 00:19:56.166 Transport Specific Address Subtype - RDMA 00:19:56.167 RDMA QP Service Type: 1 (Reliable Connected) 00:19:56.167 RDMA Provider Type: 1 (No provider specified) 00:19:56.167 RDMA CM Service: 1 (RDMA_CM) 00:19:56.167 Discovery Log Entry 1 00:19:56.167 ---------------------- 00:19:56.167 Transport Type: 1 (RDMA) 00:19:56.167 Address Family: 1 (IPv4) 00:19:56.167 Subsystem Type: 2 (NVM Subsystem) 00:19:56.167 Entry Flags: 00:19:56.167 Duplicate Returned Information: 0 00:19:56.167 Explicit Persistent Connection Support for Discovery: 0 00:19:56.167 Transport Requirements: 00:19:56.167 Secure Channel: Not Specified 00:19:56.167 Port ID: 1 (0x0001) 00:19:56.167 Controller ID: 65535 (0xffff) 00:19:56.167 Admin Max SQ Size: 32 00:19:56.167 Transport Service Identifier: 4420 00:19:56.167 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:19:56.167 Transport Address: 192.168.100.8 00:19:56.167 Transport Specific Address Subtype - RDMA 00:19:56.167 RDMA QP Service Type: 1 (Reliable Connected) 00:19:56.167 RDMA Provider Type: 1 (No provider specified) 00:19:56.167 RDMA CM Service: 1 (RDMA_CM) 00:19:56.167 17:24:05 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:56.167 EAL: No free 2048 kB 
hugepages reported on node 1 00:19:56.167 get_feature(0x01) failed 00:19:56.167 get_feature(0x02) failed 00:19:56.167 get_feature(0x04) failed 00:19:56.167 ===================================================== 00:19:56.167 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:19:56.167 ===================================================== 00:19:56.167 Controller Capabilities/Features 00:19:56.167 ================================ 00:19:56.167 Vendor ID: 0000 00:19:56.167 Subsystem Vendor ID: 0000 00:19:56.167 Serial Number: e4ee9d4f53558bece36e 00:19:56.167 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:19:56.167 Firmware Version: 6.7.0-68 00:19:56.167 Recommended Arb Burst: 6 00:19:56.167 IEEE OUI Identifier: 00 00 00 00:19:56.167 Multi-path I/O 00:19:56.167 May have multiple subsystem ports: Yes 00:19:56.167 May have multiple controllers: Yes 00:19:56.167 Associated with SR-IOV VF: No 00:19:56.167 Max Data Transfer Size: 1048576 00:19:56.167 Max Number of Namespaces: 1024 00:19:56.167 Max Number of I/O Queues: 128 00:19:56.167 NVMe Specification Version (VS): 1.3 00:19:56.167 NVMe Specification Version (Identify): 1.3 00:19:56.167 Maximum Queue Entries: 128 00:19:56.167 Contiguous Queues Required: No 00:19:56.167 Arbitration Mechanisms Supported 00:19:56.167 Weighted Round Robin: Not Supported 00:19:56.167 Vendor Specific: Not Supported 00:19:56.167 Reset Timeout: 7500 ms 00:19:56.167 Doorbell Stride: 4 bytes 00:19:56.167 NVM Subsystem Reset: Not Supported 00:19:56.167 Command Sets Supported 00:19:56.167 NVM Command Set: Supported 00:19:56.167 Boot Partition: Not Supported 00:19:56.167 Memory Page Size Minimum: 4096 bytes 00:19:56.167 Memory Page Size Maximum: 4096 bytes 00:19:56.167 Persistent Memory Region: Not Supported 00:19:56.167 Optional Asynchronous Events Supported 00:19:56.167 Namespace Attribute Notices: Supported 00:19:56.167 Firmware Activation Notices: Not Supported 00:19:56.167 ANA Change Notices: Supported 00:19:56.167 PLE Aggregate Log Change Notices: Not Supported 00:19:56.167 LBA Status Info Alert Notices: Not Supported 00:19:56.167 EGE Aggregate Log Change Notices: Not Supported 00:19:56.167 Normal NVM Subsystem Shutdown event: Not Supported 00:19:56.167 Zone Descriptor Change Notices: Not Supported 00:19:56.167 Discovery Log Change Notices: Not Supported 00:19:56.167 Controller Attributes 00:19:56.167 128-bit Host Identifier: Supported 00:19:56.167 Non-Operational Permissive Mode: Not Supported 00:19:56.167 NVM Sets: Not Supported 00:19:56.167 Read Recovery Levels: Not Supported 00:19:56.167 Endurance Groups: Not Supported 00:19:56.167 Predictable Latency Mode: Not Supported 00:19:56.167 Traffic Based Keep ALive: Supported 00:19:56.167 Namespace Granularity: Not Supported 00:19:56.167 SQ Associations: Not Supported 00:19:56.167 UUID List: Not Supported 00:19:56.167 Multi-Domain Subsystem: Not Supported 00:19:56.167 Fixed Capacity Management: Not Supported 00:19:56.167 Variable Capacity Management: Not Supported 00:19:56.167 Delete Endurance Group: Not Supported 00:19:56.167 Delete NVM Set: Not Supported 00:19:56.167 Extended LBA Formats Supported: Not Supported 00:19:56.167 Flexible Data Placement Supported: Not Supported 00:19:56.167 00:19:56.167 Controller Memory Buffer Support 00:19:56.167 ================================ 00:19:56.167 Supported: No 00:19:56.167 00:19:56.167 Persistent Memory Region Support 00:19:56.167 ================================ 00:19:56.167 Supported: No 00:19:56.167 00:19:56.167 Admin Command Set Attributes 
00:19:56.167 ============================ 00:19:56.167 Security Send/Receive: Not Supported 00:19:56.167 Format NVM: Not Supported 00:19:56.167 Firmware Activate/Download: Not Supported 00:19:56.167 Namespace Management: Not Supported 00:19:56.167 Device Self-Test: Not Supported 00:19:56.167 Directives: Not Supported 00:19:56.167 NVMe-MI: Not Supported 00:19:56.167 Virtualization Management: Not Supported 00:19:56.167 Doorbell Buffer Config: Not Supported 00:19:56.167 Get LBA Status Capability: Not Supported 00:19:56.167 Command & Feature Lockdown Capability: Not Supported 00:19:56.167 Abort Command Limit: 4 00:19:56.167 Async Event Request Limit: 4 00:19:56.167 Number of Firmware Slots: N/A 00:19:56.167 Firmware Slot 1 Read-Only: N/A 00:19:56.167 Firmware Activation Without Reset: N/A 00:19:56.167 Multiple Update Detection Support: N/A 00:19:56.167 Firmware Update Granularity: No Information Provided 00:19:56.167 Per-Namespace SMART Log: Yes 00:19:56.167 Asymmetric Namespace Access Log Page: Supported 00:19:56.167 ANA Transition Time : 10 sec 00:19:56.167 00:19:56.167 Asymmetric Namespace Access Capabilities 00:19:56.167 ANA Optimized State : Supported 00:19:56.167 ANA Non-Optimized State : Supported 00:19:56.167 ANA Inaccessible State : Supported 00:19:56.167 ANA Persistent Loss State : Supported 00:19:56.167 ANA Change State : Supported 00:19:56.167 ANAGRPID is not changed : No 00:19:56.167 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:19:56.167 00:19:56.167 ANA Group Identifier Maximum : 128 00:19:56.167 Number of ANA Group Identifiers : 128 00:19:56.167 Max Number of Allowed Namespaces : 1024 00:19:56.167 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:19:56.167 Command Effects Log Page: Supported 00:19:56.167 Get Log Page Extended Data: Supported 00:19:56.167 Telemetry Log Pages: Not Supported 00:19:56.167 Persistent Event Log Pages: Not Supported 00:19:56.167 Supported Log Pages Log Page: May Support 00:19:56.167 Commands Supported & Effects Log Page: Not Supported 00:19:56.167 Feature Identifiers & Effects Log Page:May Support 00:19:56.167 NVMe-MI Commands & Effects Log Page: May Support 00:19:56.167 Data Area 4 for Telemetry Log: Not Supported 00:19:56.167 Error Log Page Entries Supported: 128 00:19:56.167 Keep Alive: Supported 00:19:56.167 Keep Alive Granularity: 1000 ms 00:19:56.167 00:19:56.167 NVM Command Set Attributes 00:19:56.167 ========================== 00:19:56.167 Submission Queue Entry Size 00:19:56.167 Max: 64 00:19:56.167 Min: 64 00:19:56.167 Completion Queue Entry Size 00:19:56.167 Max: 16 00:19:56.167 Min: 16 00:19:56.167 Number of Namespaces: 1024 00:19:56.167 Compare Command: Not Supported 00:19:56.167 Write Uncorrectable Command: Not Supported 00:19:56.167 Dataset Management Command: Supported 00:19:56.167 Write Zeroes Command: Supported 00:19:56.167 Set Features Save Field: Not Supported 00:19:56.167 Reservations: Not Supported 00:19:56.167 Timestamp: Not Supported 00:19:56.167 Copy: Not Supported 00:19:56.167 Volatile Write Cache: Present 00:19:56.167 Atomic Write Unit (Normal): 1 00:19:56.167 Atomic Write Unit (PFail): 1 00:19:56.167 Atomic Compare & Write Unit: 1 00:19:56.167 Fused Compare & Write: Not Supported 00:19:56.167 Scatter-Gather List 00:19:56.167 SGL Command Set: Supported 00:19:56.167 SGL Keyed: Supported 00:19:56.167 SGL Bit Bucket Descriptor: Not Supported 00:19:56.167 SGL Metadata Pointer: Not Supported 00:19:56.167 Oversized SGL: Not Supported 00:19:56.167 SGL Metadata Address: Not Supported 00:19:56.167 SGL Offset: Supported 
00:19:56.167 Transport SGL Data Block: Not Supported 00:19:56.167 Replay Protected Memory Block: Not Supported 00:19:56.167 00:19:56.167 Firmware Slot Information 00:19:56.167 ========================= 00:19:56.167 Active slot: 0 00:19:56.167 00:19:56.167 Asymmetric Namespace Access 00:19:56.167 =========================== 00:19:56.167 Change Count : 0 00:19:56.167 Number of ANA Group Descriptors : 1 00:19:56.167 ANA Group Descriptor : 0 00:19:56.167 ANA Group ID : 1 00:19:56.167 Number of NSID Values : 1 00:19:56.167 Change Count : 0 00:19:56.167 ANA State : 1 00:19:56.168 Namespace Identifier : 1 00:19:56.168 00:19:56.168 Commands Supported and Effects 00:19:56.168 ============================== 00:19:56.168 Admin Commands 00:19:56.168 -------------- 00:19:56.168 Get Log Page (02h): Supported 00:19:56.168 Identify (06h): Supported 00:19:56.168 Abort (08h): Supported 00:19:56.168 Set Features (09h): Supported 00:19:56.168 Get Features (0Ah): Supported 00:19:56.168 Asynchronous Event Request (0Ch): Supported 00:19:56.168 Keep Alive (18h): Supported 00:19:56.168 I/O Commands 00:19:56.168 ------------ 00:19:56.168 Flush (00h): Supported 00:19:56.168 Write (01h): Supported LBA-Change 00:19:56.168 Read (02h): Supported 00:19:56.168 Write Zeroes (08h): Supported LBA-Change 00:19:56.168 Dataset Management (09h): Supported 00:19:56.168 00:19:56.168 Error Log 00:19:56.168 ========= 00:19:56.168 Entry: 0 00:19:56.168 Error Count: 0x3 00:19:56.168 Submission Queue Id: 0x0 00:19:56.168 Command Id: 0x5 00:19:56.168 Phase Bit: 0 00:19:56.168 Status Code: 0x2 00:19:56.168 Status Code Type: 0x0 00:19:56.168 Do Not Retry: 1 00:19:56.168 Error Location: 0x28 00:19:56.168 LBA: 0x0 00:19:56.168 Namespace: 0x0 00:19:56.168 Vendor Log Page: 0x0 00:19:56.168 ----------- 00:19:56.168 Entry: 1 00:19:56.168 Error Count: 0x2 00:19:56.168 Submission Queue Id: 0x0 00:19:56.168 Command Id: 0x5 00:19:56.168 Phase Bit: 0 00:19:56.168 Status Code: 0x2 00:19:56.168 Status Code Type: 0x0 00:19:56.168 Do Not Retry: 1 00:19:56.168 Error Location: 0x28 00:19:56.168 LBA: 0x0 00:19:56.168 Namespace: 0x0 00:19:56.168 Vendor Log Page: 0x0 00:19:56.168 ----------- 00:19:56.168 Entry: 2 00:19:56.168 Error Count: 0x1 00:19:56.168 Submission Queue Id: 0x0 00:19:56.168 Command Id: 0x0 00:19:56.168 Phase Bit: 0 00:19:56.168 Status Code: 0x2 00:19:56.168 Status Code Type: 0x0 00:19:56.168 Do Not Retry: 1 00:19:56.168 Error Location: 0x28 00:19:56.168 LBA: 0x0 00:19:56.168 Namespace: 0x0 00:19:56.168 Vendor Log Page: 0x0 00:19:56.168 00:19:56.168 Number of Queues 00:19:56.168 ================ 00:19:56.168 Number of I/O Submission Queues: 128 00:19:56.168 Number of I/O Completion Queues: 128 00:19:56.168 00:19:56.168 ZNS Specific Controller Data 00:19:56.168 ============================ 00:19:56.168 Zone Append Size Limit: 0 00:19:56.168 00:19:56.168 00:19:56.168 Active Namespaces 00:19:56.168 ================= 00:19:56.168 get_feature(0x05) failed 00:19:56.168 Namespace ID:1 00:19:56.168 Command Set Identifier: NVM (00h) 00:19:56.168 Deallocate: Supported 00:19:56.168 Deallocated/Unwritten Error: Not Supported 00:19:56.168 Deallocated Read Value: Unknown 00:19:56.168 Deallocate in Write Zeroes: Not Supported 00:19:56.168 Deallocated Guard Field: 0xFFFF 00:19:56.168 Flush: Supported 00:19:56.168 Reservation: Not Supported 00:19:56.168 Namespace Sharing Capabilities: Multiple Controllers 00:19:56.168 Size (in LBAs): 3125627568 (1490GiB) 00:19:56.168 Capacity (in LBAs): 3125627568 (1490GiB) 00:19:56.168 Utilization (in LBAs): 3125627568 
(1490GiB) 00:19:56.168 UUID: 589f7a36-c9a4-493b-9985-7b8d50a1af99 00:19:56.168 Thin Provisioning: Not Supported 00:19:56.168 Per-NS Atomic Units: Yes 00:19:56.168 Atomic Boundary Size (Normal): 0 00:19:56.168 Atomic Boundary Size (PFail): 0 00:19:56.168 Atomic Boundary Offset: 0 00:19:56.168 NGUID/EUI64 Never Reused: No 00:19:56.168 ANA group ID: 1 00:19:56.168 Namespace Write Protected: No 00:19:56.168 Number of LBA Formats: 1 00:19:56.168 Current LBA Format: LBA Format #00 00:19:56.168 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:56.168 00:19:56.168 17:24:05 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:19:56.168 17:24:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:56.168 17:24:05 -- nvmf/common.sh@117 -- # sync 00:19:56.168 17:24:05 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:56.168 17:24:05 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:56.168 17:24:05 -- nvmf/common.sh@120 -- # set +e 00:19:56.168 17:24:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:56.168 17:24:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:56.168 rmmod nvme_rdma 00:19:56.168 rmmod nvme_fabrics 00:19:56.168 17:24:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:56.168 17:24:05 -- nvmf/common.sh@124 -- # set -e 00:19:56.168 17:24:05 -- nvmf/common.sh@125 -- # return 0 00:19:56.168 17:24:05 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:19:56.168 17:24:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:56.168 17:24:05 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:19:56.168 17:24:05 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:19:56.168 17:24:05 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:56.168 17:24:05 -- nvmf/common.sh@675 -- # echo 0 00:19:56.426 17:24:05 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:56.426 17:24:05 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:56.426 17:24:05 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:56.426 17:24:05 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:56.426 17:24:05 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:19:56.426 17:24:05 -- nvmf/common.sh@684 -- # modprobe -r nvmet_rdma nvmet 00:19:56.426 17:24:05 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:19:58.956 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:19:58.956 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:19:58.956 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:19:58.956 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:19:58.956 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:19:58.956 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:19:58.956 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:19:58.956 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:19:58.956 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:19:58.957 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:19:58.957 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:19:58.957 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:19:58.957 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:19:58.957 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:19:58.957 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:19:58.957 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:20:00.859 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:20:00.859 
00:20:00.859 real 0m13.682s 00:20:00.859 user 0m3.762s 00:20:00.859 sys 0m7.692s 00:20:00.859 17:24:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:00.859 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:20:00.859 ************************************ 00:20:00.859 END TEST nvmf_identify_kernel_target 00:20:00.859 ************************************ 00:20:00.859 17:24:09 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:20:00.859 17:24:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:00.859 17:24:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:00.859 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:20:00.859 ************************************ 00:20:00.859 START TEST nvmf_auth 00:20:00.859 ************************************ 00:20:00.859 17:24:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:20:00.859 * Looking for test storage... 00:20:00.859 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:00.859 17:24:09 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:00.859 17:24:09 -- nvmf/common.sh@7 -- # uname -s 00:20:00.859 17:24:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:00.859 17:24:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:00.859 17:24:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:00.859 17:24:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:00.859 17:24:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:00.859 17:24:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:00.859 17:24:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:00.859 17:24:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:00.859 17:24:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:00.859 17:24:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:00.859 17:24:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:20:00.859 17:24:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:20:00.859 17:24:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:00.859 17:24:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:00.859 17:24:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:00.859 17:24:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:00.859 17:24:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:00.859 17:24:09 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:00.859 17:24:09 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:00.859 17:24:09 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:00.859 17:24:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
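The controller and namespace dump that the nvmf_identify_kernel_target test printed just above comes from the test's own identify step; while the nqn.2016-06.io.spdk:testnqn target of that run was still exported on 192.168.100.8 (port 4420 per common.sh), the same details could have been re-read by hand with stock nvme-cli. The /dev/nvme1 node below is only a placeholder, since the real controller index depends on what else is attached on the host:

# Hedged manual equivalent, not part of the captured run (the kernel target is
# already torn down by nvmftestfini/clean_kernel_target above).
nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:testnqn
nvme id-ctrl /dev/nvme1 -H      # controller view: ANA, SGL and log-page support
nvme id-ns /dev/nvme1n1 -H      # namespace view: size/utilization and LBA formats
nvme disconnect -n nqn.2016-06.io.spdk:testnqn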
00:20:00.860 17:24:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.860 17:24:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.860 17:24:09 -- paths/export.sh@5 -- # export PATH 00:20:00.860 17:24:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.860 17:24:09 -- nvmf/common.sh@47 -- # : 0 00:20:00.860 17:24:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:00.860 17:24:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:00.860 17:24:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:00.860 17:24:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:00.860 17:24:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:00.860 17:24:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:00.860 17:24:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:00.860 17:24:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:00.860 17:24:09 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:00.860 17:24:09 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:00.860 17:24:09 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:20:00.860 17:24:09 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:20:00.860 17:24:09 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:00.860 17:24:09 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:00.860 17:24:09 -- host/auth.sh@21 -- # keys=() 00:20:00.860 17:24:09 -- host/auth.sh@77 -- # nvmftestinit 00:20:00.860 17:24:09 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:20:00.860 17:24:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.860 17:24:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:00.860 17:24:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:00.860 17:24:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:00.860 17:24:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:20:00.860 17:24:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:00.860 17:24:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.860 17:24:09 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:00.860 17:24:09 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:00.860 17:24:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:00.860 17:24:09 -- common/autotest_common.sh@10 -- # set +x 00:20:06.125 17:24:15 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:06.125 17:24:15 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:06.125 17:24:15 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:06.125 17:24:15 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:06.125 17:24:15 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:06.125 17:24:15 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:06.125 17:24:15 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:06.125 17:24:15 -- nvmf/common.sh@295 -- # net_devs=() 00:20:06.125 17:24:15 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:06.125 17:24:15 -- nvmf/common.sh@296 -- # e810=() 00:20:06.125 17:24:15 -- nvmf/common.sh@296 -- # local -ga e810 00:20:06.125 17:24:15 -- nvmf/common.sh@297 -- # x722=() 00:20:06.125 17:24:15 -- nvmf/common.sh@297 -- # local -ga x722 00:20:06.125 17:24:15 -- nvmf/common.sh@298 -- # mlx=() 00:20:06.125 17:24:15 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:06.125 17:24:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:06.125 17:24:15 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:06.125 17:24:15 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:06.125 17:24:15 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:06.125 17:24:15 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:06.125 17:24:15 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:06.125 17:24:15 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:06.125 17:24:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:06.125 17:24:15 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:06.125 17:24:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:06.125 17:24:15 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:06.125 17:24:15 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:06.125 17:24:15 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:06.125 17:24:15 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:06.125 17:24:15 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:06.125 17:24:15 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:06.125 17:24:15 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:06.125 17:24:15 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:06.125 17:24:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:06.125 17:24:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:20:06.125 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:20:06.125 17:24:15 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:06.125 17:24:15 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:06.125 17:24:15 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:06.125 17:24:15 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:06.125 17:24:15 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 
00:20:06.125 17:24:15 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:06.125 17:24:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:06.125 17:24:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:20:06.125 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:20:06.125 17:24:15 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:06.125 17:24:15 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:06.125 17:24:15 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:06.125 17:24:15 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:06.125 17:24:15 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:06.125 17:24:15 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:06.125 17:24:15 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:06.125 17:24:15 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:06.126 17:24:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:06.126 17:24:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:06.126 17:24:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:06.126 17:24:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.126 17:24:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:20:06.126 Found net devices under 0000:da:00.0: mlx_0_0 00:20:06.126 17:24:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.126 17:24:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:06.126 17:24:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:06.126 17:24:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:06.126 17:24:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.126 17:24:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:20:06.126 Found net devices under 0000:da:00.1: mlx_0_1 00:20:06.126 17:24:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.126 17:24:15 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:06.126 17:24:15 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:06.126 17:24:15 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:06.126 17:24:15 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:20:06.126 17:24:15 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:20:06.126 17:24:15 -- nvmf/common.sh@409 -- # rdma_device_init 00:20:06.126 17:24:15 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:20:06.126 17:24:15 -- nvmf/common.sh@58 -- # uname 00:20:06.126 17:24:15 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:06.126 17:24:15 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:06.385 17:24:15 -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:06.385 17:24:15 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:06.385 17:24:15 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:06.385 17:24:15 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:06.385 17:24:15 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:06.385 17:24:15 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:06.385 17:24:15 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:20:06.385 17:24:15 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:06.385 17:24:15 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:06.385 17:24:15 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:06.385 17:24:15 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:06.385 17:24:15 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:06.385 17:24:15 -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:06.385 17:24:15 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:06.385 17:24:15 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:06.385 17:24:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:06.385 17:24:15 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:06.385 17:24:15 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:06.385 17:24:15 -- nvmf/common.sh@105 -- # continue 2 00:20:06.385 17:24:15 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:06.385 17:24:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:06.385 17:24:15 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:06.385 17:24:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:06.385 17:24:15 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:06.385 17:24:15 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:06.385 17:24:15 -- nvmf/common.sh@105 -- # continue 2 00:20:06.385 17:24:15 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:06.385 17:24:15 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:06.385 17:24:15 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:06.385 17:24:15 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:06.385 17:24:15 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:06.385 17:24:15 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:06.385 17:24:15 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:06.385 17:24:15 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:06.385 17:24:15 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:06.385 434: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:06.385 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:20:06.385 altname enp218s0f0np0 00:20:06.385 altname ens818f0np0 00:20:06.385 inet 192.168.100.8/24 scope global mlx_0_0 00:20:06.385 valid_lft forever preferred_lft forever 00:20:06.385 17:24:15 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:06.385 17:24:15 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:06.385 17:24:15 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:06.385 17:24:15 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:06.385 17:24:15 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:06.385 17:24:15 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:06.385 17:24:15 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:06.385 17:24:15 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:06.385 17:24:15 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:06.385 435: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:06.385 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:20:06.385 altname enp218s0f1np1 00:20:06.385 altname ens818f1np1 00:20:06.385 inet 192.168.100.9/24 scope global mlx_0_1 00:20:06.385 valid_lft forever preferred_lft forever 00:20:06.385 17:24:15 -- nvmf/common.sh@411 -- # return 0 00:20:06.385 17:24:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:06.385 17:24:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:06.385 17:24:15 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:20:06.385 17:24:15 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:20:06.385 17:24:15 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:06.385 17:24:15 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:06.385 17:24:15 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:06.385 17:24:15 -- 
nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:06.385 17:24:15 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:06.385 17:24:15 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:06.385 17:24:15 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:06.385 17:24:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:06.385 17:24:15 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:06.385 17:24:15 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:06.385 17:24:15 -- nvmf/common.sh@105 -- # continue 2 00:20:06.385 17:24:15 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:06.385 17:24:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:06.385 17:24:15 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:06.385 17:24:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:06.385 17:24:15 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:06.385 17:24:15 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:06.385 17:24:15 -- nvmf/common.sh@105 -- # continue 2 00:20:06.385 17:24:15 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:06.385 17:24:15 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:06.385 17:24:15 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:06.385 17:24:15 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:06.385 17:24:15 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:06.385 17:24:15 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:06.385 17:24:15 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:06.385 17:24:15 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:06.385 17:24:15 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:06.385 17:24:15 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:06.386 17:24:15 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:06.386 17:24:15 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:06.386 17:24:15 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:20:06.386 192.168.100.9' 00:20:06.386 17:24:15 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:06.386 192.168.100.9' 00:20:06.386 17:24:15 -- nvmf/common.sh@446 -- # head -n 1 00:20:06.386 17:24:15 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:06.386 17:24:15 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:20:06.386 192.168.100.9' 00:20:06.386 17:24:15 -- nvmf/common.sh@447 -- # tail -n +2 00:20:06.386 17:24:15 -- nvmf/common.sh@447 -- # head -n 1 00:20:06.386 17:24:15 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:06.386 17:24:15 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:20:06.386 17:24:15 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:06.386 17:24:15 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:20:06.386 17:24:15 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:20:06.386 17:24:15 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:20:06.386 17:24:15 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:20:06.386 17:24:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:06.386 17:24:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:06.386 17:24:15 -- common/autotest_common.sh@10 -- # set +x 00:20:06.386 17:24:15 -- nvmf/common.sh@470 -- # nvmfpid=3052265 00:20:06.386 17:24:15 -- nvmf/common.sh@471 -- # waitforlisten 3052265 00:20:06.386 17:24:15 -- nvmf/common.sh@469 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:20:06.386 17:24:15 -- common/autotest_common.sh@817 -- # '[' -z 3052265 ']' 00:20:06.386 17:24:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.386 17:24:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:06.386 17:24:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.386 17:24:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:06.386 17:24:15 -- common/autotest_common.sh@10 -- # set +x 00:20:07.320 17:24:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:07.320 17:24:16 -- common/autotest_common.sh@850 -- # return 0 00:20:07.320 17:24:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:07.320 17:24:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:07.320 17:24:16 -- common/autotest_common.sh@10 -- # set +x 00:20:07.320 17:24:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.320 17:24:16 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:20:07.320 17:24:16 -- host/auth.sh@81 -- # gen_key null 32 00:20:07.320 17:24:16 -- host/auth.sh@53 -- # local digest len file key 00:20:07.320 17:24:16 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:07.320 17:24:16 -- host/auth.sh@54 -- # local -A digests 00:20:07.320 17:24:16 -- host/auth.sh@56 -- # digest=null 00:20:07.320 17:24:16 -- host/auth.sh@56 -- # len=32 00:20:07.320 17:24:16 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:07.320 17:24:16 -- host/auth.sh@57 -- # key=3a3c59465ceadf88b793cabb2c927021 00:20:07.320 17:24:16 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:20:07.320 17:24:16 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.a0E 00:20:07.320 17:24:16 -- host/auth.sh@59 -- # format_dhchap_key 3a3c59465ceadf88b793cabb2c927021 0 00:20:07.320 17:24:16 -- nvmf/common.sh@708 -- # format_key DHHC-1 3a3c59465ceadf88b793cabb2c927021 0 00:20:07.320 17:24:16 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:07.320 17:24:16 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:20:07.320 17:24:16 -- nvmf/common.sh@693 -- # key=3a3c59465ceadf88b793cabb2c927021 00:20:07.320 17:24:16 -- nvmf/common.sh@693 -- # digest=0 00:20:07.320 17:24:16 -- nvmf/common.sh@694 -- # python - 00:20:07.320 17:24:16 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.a0E 00:20:07.320 17:24:16 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.a0E 00:20:07.320 17:24:16 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.a0E 00:20:07.320 17:24:16 -- host/auth.sh@82 -- # gen_key null 48 00:20:07.320 17:24:16 -- host/auth.sh@53 -- # local digest len file key 00:20:07.320 17:24:16 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:07.320 17:24:16 -- host/auth.sh@54 -- # local -A digests 00:20:07.320 17:24:16 -- host/auth.sh@56 -- # digest=null 00:20:07.320 17:24:16 -- host/auth.sh@56 -- # len=48 00:20:07.320 17:24:16 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:07.320 17:24:16 -- host/auth.sh@57 -- # key=9b5606408cc0fb8efc8084ac8c7344cc639c617cfbf3b97d 00:20:07.321 17:24:16 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:20:07.321 17:24:16 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.fh1 00:20:07.321 17:24:16 
-- host/auth.sh@59 -- # format_dhchap_key 9b5606408cc0fb8efc8084ac8c7344cc639c617cfbf3b97d 0 00:20:07.321 17:24:16 -- nvmf/common.sh@708 -- # format_key DHHC-1 9b5606408cc0fb8efc8084ac8c7344cc639c617cfbf3b97d 0 00:20:07.321 17:24:16 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:07.321 17:24:16 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:20:07.321 17:24:16 -- nvmf/common.sh@693 -- # key=9b5606408cc0fb8efc8084ac8c7344cc639c617cfbf3b97d 00:20:07.321 17:24:16 -- nvmf/common.sh@693 -- # digest=0 00:20:07.321 17:24:16 -- nvmf/common.sh@694 -- # python - 00:20:07.321 17:24:16 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.fh1 00:20:07.579 17:24:16 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.fh1 00:20:07.579 17:24:16 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.fh1 00:20:07.579 17:24:16 -- host/auth.sh@83 -- # gen_key sha256 32 00:20:07.579 17:24:16 -- host/auth.sh@53 -- # local digest len file key 00:20:07.579 17:24:16 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:07.579 17:24:16 -- host/auth.sh@54 -- # local -A digests 00:20:07.579 17:24:16 -- host/auth.sh@56 -- # digest=sha256 00:20:07.579 17:24:16 -- host/auth.sh@56 -- # len=32 00:20:07.579 17:24:16 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:07.579 17:24:16 -- host/auth.sh@57 -- # key=816e457f6d1bfab4a72500c80a0d2655 00:20:07.579 17:24:16 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:20:07.579 17:24:16 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.OCN 00:20:07.579 17:24:16 -- host/auth.sh@59 -- # format_dhchap_key 816e457f6d1bfab4a72500c80a0d2655 1 00:20:07.579 17:24:16 -- nvmf/common.sh@708 -- # format_key DHHC-1 816e457f6d1bfab4a72500c80a0d2655 1 00:20:07.579 17:24:16 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:07.579 17:24:16 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:20:07.579 17:24:16 -- nvmf/common.sh@693 -- # key=816e457f6d1bfab4a72500c80a0d2655 00:20:07.579 17:24:16 -- nvmf/common.sh@693 -- # digest=1 00:20:07.579 17:24:16 -- nvmf/common.sh@694 -- # python - 00:20:07.579 17:24:16 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.OCN 00:20:07.579 17:24:16 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.OCN 00:20:07.579 17:24:16 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.OCN 00:20:07.579 17:24:16 -- host/auth.sh@84 -- # gen_key sha384 48 00:20:07.579 17:24:16 -- host/auth.sh@53 -- # local digest len file key 00:20:07.579 17:24:16 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:07.579 17:24:16 -- host/auth.sh@54 -- # local -A digests 00:20:07.579 17:24:16 -- host/auth.sh@56 -- # digest=sha384 00:20:07.579 17:24:16 -- host/auth.sh@56 -- # len=48 00:20:07.579 17:24:16 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:07.579 17:24:16 -- host/auth.sh@57 -- # key=94d6fd6f9259c9a2687cce4f75bc5616c82406b6bad5edc6 00:20:07.579 17:24:16 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:20:07.579 17:24:16 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.tiH 00:20:07.579 17:24:16 -- host/auth.sh@59 -- # format_dhchap_key 94d6fd6f9259c9a2687cce4f75bc5616c82406b6bad5edc6 2 00:20:07.579 17:24:16 -- nvmf/common.sh@708 -- # format_key DHHC-1 94d6fd6f9259c9a2687cce4f75bc5616c82406b6bad5edc6 2 00:20:07.579 17:24:16 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:07.579 17:24:16 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:20:07.579 17:24:16 -- nvmf/common.sh@693 -- # key=94d6fd6f9259c9a2687cce4f75bc5616c82406b6bad5edc6 00:20:07.579 17:24:16 
-- nvmf/common.sh@693 -- # digest=2 00:20:07.579 17:24:16 -- nvmf/common.sh@694 -- # python - 00:20:07.579 17:24:16 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.tiH 00:20:07.579 17:24:16 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.tiH 00:20:07.579 17:24:16 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.tiH 00:20:07.579 17:24:16 -- host/auth.sh@85 -- # gen_key sha512 64 00:20:07.579 17:24:16 -- host/auth.sh@53 -- # local digest len file key 00:20:07.579 17:24:16 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:07.579 17:24:16 -- host/auth.sh@54 -- # local -A digests 00:20:07.579 17:24:16 -- host/auth.sh@56 -- # digest=sha512 00:20:07.579 17:24:16 -- host/auth.sh@56 -- # len=64 00:20:07.579 17:24:16 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:07.579 17:24:16 -- host/auth.sh@57 -- # key=fca2ce4f69dfb7996f37a1f8905ed1b5a9c9f5bf018a12328785a38315b2d246 00:20:07.579 17:24:16 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:20:07.579 17:24:16 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.69I 00:20:07.579 17:24:16 -- host/auth.sh@59 -- # format_dhchap_key fca2ce4f69dfb7996f37a1f8905ed1b5a9c9f5bf018a12328785a38315b2d246 3 00:20:07.579 17:24:16 -- nvmf/common.sh@708 -- # format_key DHHC-1 fca2ce4f69dfb7996f37a1f8905ed1b5a9c9f5bf018a12328785a38315b2d246 3 00:20:07.579 17:24:16 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:07.579 17:24:16 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:20:07.579 17:24:16 -- nvmf/common.sh@693 -- # key=fca2ce4f69dfb7996f37a1f8905ed1b5a9c9f5bf018a12328785a38315b2d246 00:20:07.579 17:24:16 -- nvmf/common.sh@693 -- # digest=3 00:20:07.579 17:24:16 -- nvmf/common.sh@694 -- # python - 00:20:07.579 17:24:16 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.69I 00:20:07.579 17:24:16 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.69I 00:20:07.579 17:24:16 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.69I 00:20:07.579 17:24:16 -- host/auth.sh@87 -- # waitforlisten 3052265 00:20:07.579 17:24:16 -- common/autotest_common.sh@817 -- # '[' -z 3052265 ']' 00:20:07.579 17:24:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.579 17:24:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:07.579 17:24:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
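The five /tmp/spdk.key-*.* files minted above all carry the DH-HMAC-CHAP secret representation "DHHC-1:<hash id>:<base64 of the secret plus a CRC-32 suffix>:". A minimal stand-alone rework of the gen_key/format_dhchap_key steps visible in the trace is sketched below; the two-digit hash tag, the little-endian CRC-32 placement and the python helper body are assumptions inferred from the values printed above, not lines copied from nvmf/common.sh.

# Hypothetical helper mirroring gen_key as traced above.
# digest_id: "00" (null), "01" (sha256), "02" (sha384), "03" (sha512)
# hexlen:    number of hex characters to draw from /dev/urandom (32/48/64)
gen_dhchap_key() {
    local digest_id=$1 hexlen=$2 key file
    key=$(xxd -p -c0 -l $((hexlen / 2)) /dev/urandom)
    file=$(mktemp -t spdk.key-XXXXXX)
    python3 - "$digest_id" "$key" > "$file" <<'PY'
import base64, sys, zlib
digest, key = sys.argv[1], sys.argv[2].encode()
crc = zlib.crc32(key).to_bytes(4, "little")      # assumed CRC-32 placement
print(f"DHHC-1:{digest}:{base64.b64encode(key + crc).decode()}:")
PY
    chmod 0600 "$file"
    echo "$file"
}
# gen_dhchap_key 00 32 would yield a file holding a key shaped like the
# DHHC-1:00:...: values echoed into the trace above.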
00:20:07.579 17:24:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:07.579 17:24:16 -- common/autotest_common.sh@10 -- # set +x 00:20:07.837 17:24:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:07.837 17:24:16 -- common/autotest_common.sh@850 -- # return 0 00:20:07.837 17:24:16 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:20:07.837 17:24:16 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.a0E 00:20:07.837 17:24:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.837 17:24:16 -- common/autotest_common.sh@10 -- # set +x 00:20:07.837 17:24:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.837 17:24:16 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:20:07.837 17:24:16 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.fh1 00:20:07.837 17:24:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.837 17:24:16 -- common/autotest_common.sh@10 -- # set +x 00:20:07.837 17:24:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.837 17:24:16 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:20:07.837 17:24:16 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.OCN 00:20:07.837 17:24:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.837 17:24:16 -- common/autotest_common.sh@10 -- # set +x 00:20:07.837 17:24:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.837 17:24:16 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:20:07.837 17:24:16 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.tiH 00:20:07.837 17:24:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.837 17:24:16 -- common/autotest_common.sh@10 -- # set +x 00:20:07.837 17:24:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.837 17:24:16 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:20:07.837 17:24:16 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.69I 00:20:07.837 17:24:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.837 17:24:16 -- common/autotest_common.sh@10 -- # set +x 00:20:07.837 17:24:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.837 17:24:16 -- host/auth.sh@92 -- # nvmet_auth_init 00:20:07.837 17:24:16 -- host/auth.sh@35 -- # get_main_ns_ip 00:20:07.837 17:24:16 -- nvmf/common.sh@717 -- # local ip 00:20:07.837 17:24:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:07.837 17:24:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:07.837 17:24:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.837 17:24:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.837 17:24:16 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:07.837 17:24:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:07.837 17:24:16 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:07.837 17:24:16 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:07.837 17:24:16 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:07.837 17:24:16 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:20:07.837 17:24:16 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:20:07.837 17:24:16 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:20:07.837 17:24:16 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:07.837 17:24:16 -- 
nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:07.837 17:24:16 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:07.837 17:24:16 -- nvmf/common.sh@628 -- # local block nvme 00:20:07.837 17:24:16 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:20:07.837 17:24:16 -- nvmf/common.sh@631 -- # modprobe nvmet 00:20:07.837 17:24:16 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:07.837 17:24:16 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:20:10.362 Waiting for block devices as requested 00:20:10.362 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:20:10.621 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:20:10.621 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:20:10.908 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:20:10.908 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:20:10.908 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:20:10.908 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:20:11.175 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:20:11.175 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:20:11.175 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:20:11.175 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:20:11.431 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:20:11.431 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:20:11.431 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:20:11.688 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:20:11.688 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:20:11.688 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:20:12.280 17:24:21 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:12.280 17:24:21 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:12.280 17:24:21 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:20:12.280 17:24:21 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:12.280 17:24:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:12.280 17:24:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:12.280 17:24:21 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:20:12.280 17:24:21 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:20:12.280 17:24:21 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:20:12.280 No valid GPT data, bailing 00:20:12.280 17:24:21 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:12.280 17:24:21 -- scripts/common.sh@391 -- # pt= 00:20:12.280 17:24:21 -- scripts/common.sh@392 -- # return 1 00:20:12.280 17:24:21 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:20:12.280 17:24:21 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:20:12.280 17:24:21 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:12.280 17:24:21 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:12.538 17:24:21 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:12.538 17:24:21 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:20:12.538 17:24:21 -- nvmf/common.sh@656 -- # echo 1 00:20:12.538 17:24:21 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:20:12.538 17:24:21 -- nvmf/common.sh@658 -- # echo 1 00:20:12.538 17:24:21 -- nvmf/common.sh@660 -- # echo 192.168.100.8 00:20:12.538 17:24:21 -- nvmf/common.sh@661 -- # echo rdma 
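The mkdir/echo sequence here, together with the writes that continue just below (4420, ipv4 and the ln -s into ports/1/subsystems), builds a kernel NVMe-oF target through configfs. xtrace does not show where each echo is redirected, so the attribute names in this consolidated sketch are the standard nvmet configfs ones and should be read as assumptions rather than a literal replay of nvmf/common.sh:

# Assumed shape of the configure_kernel_target steps traced above and below.
sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
mkdir "$sub" "$sub/namespaces/1" "$port"
echo /dev/nvme0n1  > "$sub/namespaces/1/device_path"   # the local NVMe drive claimed above
echo 1             > "$sub/namespaces/1/enable"
echo 192.168.100.8 > "$port/addr_traddr"
echo rdma          > "$port/addr_trtype"
echo 4420          > "$port/addr_trsvcid"
echo ipv4          > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"                       # expose the subsystem on the port
# The bare "echo 1"/"echo 0" writes in the trace are assumed to toggle
# attr_allow_any_host before host/auth.sh links an explicit allowed host.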
00:20:12.538 17:24:21 -- nvmf/common.sh@662 -- # echo 4420 00:20:12.538 17:24:21 -- nvmf/common.sh@663 -- # echo ipv4 00:20:12.538 17:24:21 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:12.538 17:24:21 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:20:12.538 00:20:12.538 Discovery Log Number of Records 2, Generation counter 2 00:20:12.538 =====Discovery Log Entry 0====== 00:20:12.538 trtype: rdma 00:20:12.538 adrfam: ipv4 00:20:12.538 subtype: current discovery subsystem 00:20:12.538 treq: not specified, sq flow control disable supported 00:20:12.538 portid: 1 00:20:12.538 trsvcid: 4420 00:20:12.538 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:12.538 traddr: 192.168.100.8 00:20:12.538 eflags: none 00:20:12.538 rdma_prtype: not specified 00:20:12.538 rdma_qptype: connected 00:20:12.538 rdma_cms: rdma-cm 00:20:12.538 rdma_pkey: 0x0000 00:20:12.538 =====Discovery Log Entry 1====== 00:20:12.538 trtype: rdma 00:20:12.538 adrfam: ipv4 00:20:12.538 subtype: nvme subsystem 00:20:12.538 treq: not specified, sq flow control disable supported 00:20:12.538 portid: 1 00:20:12.538 trsvcid: 4420 00:20:12.538 subnqn: nqn.2024-02.io.spdk:cnode0 00:20:12.538 traddr: 192.168.100.8 00:20:12.538 eflags: none 00:20:12.538 rdma_prtype: not specified 00:20:12.538 rdma_qptype: connected 00:20:12.538 rdma_cms: rdma-cm 00:20:12.538 rdma_pkey: 0x0000 00:20:12.538 17:24:21 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:12.538 17:24:21 -- host/auth.sh@37 -- # echo 0 00:20:12.538 17:24:21 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:12.538 17:24:21 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:12.538 17:24:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:12.538 17:24:21 -- host/auth.sh@44 -- # digest=sha256 00:20:12.538 17:24:21 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:12.538 17:24:21 -- host/auth.sh@44 -- # keyid=1 00:20:12.538 17:24:21 -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:12.538 17:24:21 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:12.538 17:24:21 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:12.538 17:24:21 -- host/auth.sh@49 -- # echo DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:12.538 17:24:21 -- host/auth.sh@100 -- # IFS=, 00:20:12.538 17:24:21 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:20:12.538 17:24:21 -- host/auth.sh@100 -- # IFS=, 00:20:12.538 17:24:21 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:12.538 17:24:21 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:20:12.539 17:24:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:12.539 17:24:21 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:20:12.539 17:24:21 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:12.539 17:24:21 -- host/auth.sh@68 -- # keyid=1 00:20:12.539 17:24:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:12.539 17:24:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.539 17:24:21 -- common/autotest_common.sh@10 -- # set +x 00:20:12.539 17:24:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:12.539 17:24:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:12.539 17:24:21 -- nvmf/common.sh@717 -- # local ip 00:20:12.539 17:24:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:12.539 17:24:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:12.539 17:24:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.539 17:24:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.539 17:24:21 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:12.539 17:24:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:12.539 17:24:21 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:12.539 17:24:21 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:12.539 17:24:21 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:12.539 17:24:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:12.539 17:24:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.539 17:24:21 -- common/autotest_common.sh@10 -- # set +x 00:20:12.796 nvme0n1 00:20:12.796 17:24:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:12.796 17:24:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:12.796 17:24:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.796 17:24:21 -- common/autotest_common.sh@10 -- # set +x 00:20:12.796 17:24:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:12.796 17:24:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:12.796 17:24:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.796 17:24:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.796 17:24:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.796 17:24:21 -- common/autotest_common.sh@10 -- # set +x 00:20:12.796 17:24:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:12.796 17:24:21 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:20:12.796 17:24:21 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:12.796 17:24:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:12.796 17:24:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:20:12.796 17:24:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:12.796 17:24:21 -- host/auth.sh@44 -- # digest=sha256 00:20:12.796 17:24:21 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:12.796 17:24:21 -- host/auth.sh@44 -- # keyid=0 00:20:12.796 17:24:21 -- host/auth.sh@45 -- # key=DHHC-1:00:M2EzYzU5NDY1Y2VhZGY4OGI3OTNjYWJiMmM5MjcwMjF5ns9H: 00:20:12.796 17:24:21 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:12.796 17:24:21 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:12.796 17:24:21 -- host/auth.sh@49 -- # echo DHHC-1:00:M2EzYzU5NDY1Y2VhZGY4OGI3OTNjYWJiMmM5MjcwMjF5ns9H: 00:20:12.796 17:24:21 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:20:12.796 17:24:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:12.796 17:24:21 -- host/auth.sh@68 -- # digest=sha256 00:20:12.796 17:24:21 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:12.796 17:24:21 -- host/auth.sh@68 -- # keyid=0 00:20:12.796 17:24:21 -- 
host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:12.796 17:24:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.796 17:24:21 -- common/autotest_common.sh@10 -- # set +x 00:20:12.796 17:24:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:12.796 17:24:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:12.796 17:24:22 -- nvmf/common.sh@717 -- # local ip 00:20:12.796 17:24:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:12.796 17:24:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:12.796 17:24:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.796 17:24:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.796 17:24:22 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:12.796 17:24:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:12.796 17:24:22 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:12.796 17:24:22 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:12.796 17:24:22 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:12.796 17:24:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:12.796 17:24:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.796 17:24:22 -- common/autotest_common.sh@10 -- # set +x 00:20:13.053 nvme0n1 00:20:13.053 17:24:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.053 17:24:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.053 17:24:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:13.053 17:24:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.053 17:24:22 -- common/autotest_common.sh@10 -- # set +x 00:20:13.053 17:24:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.053 17:24:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.053 17:24:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.053 17:24:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.053 17:24:22 -- common/autotest_common.sh@10 -- # set +x 00:20:13.053 17:24:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.053 17:24:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:13.053 17:24:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:13.053 17:24:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:13.053 17:24:22 -- host/auth.sh@44 -- # digest=sha256 00:20:13.053 17:24:22 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:13.054 17:24:22 -- host/auth.sh@44 -- # keyid=1 00:20:13.054 17:24:22 -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:13.054 17:24:22 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:13.054 17:24:22 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:13.054 17:24:22 -- host/auth.sh@49 -- # echo DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:13.054 17:24:22 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:20:13.054 17:24:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:13.054 17:24:22 -- host/auth.sh@68 -- # digest=sha256 00:20:13.054 17:24:22 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:13.054 17:24:22 -- host/auth.sh@68 -- # keyid=1 00:20:13.054 17:24:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe2048 00:20:13.054 17:24:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.054 17:24:22 -- common/autotest_common.sh@10 -- # set +x 00:20:13.054 17:24:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.054 17:24:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:13.054 17:24:22 -- nvmf/common.sh@717 -- # local ip 00:20:13.054 17:24:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:13.054 17:24:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:13.054 17:24:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.054 17:24:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.054 17:24:22 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:13.054 17:24:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:13.054 17:24:22 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:13.054 17:24:22 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:13.054 17:24:22 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:13.054 17:24:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:13.054 17:24:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.054 17:24:22 -- common/autotest_common.sh@10 -- # set +x 00:20:13.311 nvme0n1 00:20:13.311 17:24:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.311 17:24:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.311 17:24:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:13.311 17:24:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.311 17:24:22 -- common/autotest_common.sh@10 -- # set +x 00:20:13.311 17:24:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.311 17:24:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.311 17:24:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.311 17:24:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.311 17:24:22 -- common/autotest_common.sh@10 -- # set +x 00:20:13.311 17:24:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.311 17:24:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:13.311 17:24:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:13.311 17:24:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:13.568 17:24:22 -- host/auth.sh@44 -- # digest=sha256 00:20:13.568 17:24:22 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:13.568 17:24:22 -- host/auth.sh@44 -- # keyid=2 00:20:13.568 17:24:22 -- host/auth.sh@45 -- # key=DHHC-1:01:ODE2ZTQ1N2Y2ZDFiZmFiNGE3MjUwMGM4MGEwZDI2NTVrVT6Y: 00:20:13.568 17:24:22 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:13.568 17:24:22 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:13.568 17:24:22 -- host/auth.sh@49 -- # echo DHHC-1:01:ODE2ZTQ1N2Y2ZDFiZmFiNGE3MjUwMGM4MGEwZDI2NTVrVT6Y: 00:20:13.568 17:24:22 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:20:13.568 17:24:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:13.568 17:24:22 -- host/auth.sh@68 -- # digest=sha256 00:20:13.568 17:24:22 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:13.568 17:24:22 -- host/auth.sh@68 -- # keyid=2 00:20:13.568 17:24:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:13.568 17:24:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.568 17:24:22 -- common/autotest_common.sh@10 -- # 
set +x 00:20:13.568 17:24:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.568 17:24:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:13.568 17:24:22 -- nvmf/common.sh@717 -- # local ip 00:20:13.569 17:24:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:13.569 17:24:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:13.569 17:24:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.569 17:24:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.569 17:24:22 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:13.569 17:24:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:13.569 17:24:22 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:13.569 17:24:22 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:13.569 17:24:22 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:13.569 17:24:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:13.569 17:24:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.569 17:24:22 -- common/autotest_common.sh@10 -- # set +x 00:20:13.569 nvme0n1 00:20:13.569 17:24:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.569 17:24:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.569 17:24:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:13.569 17:24:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.569 17:24:22 -- common/autotest_common.sh@10 -- # set +x 00:20:13.569 17:24:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.569 17:24:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.569 17:24:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.826 17:24:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.826 17:24:22 -- common/autotest_common.sh@10 -- # set +x 00:20:13.826 17:24:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.826 17:24:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:13.826 17:24:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:20:13.826 17:24:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:13.826 17:24:22 -- host/auth.sh@44 -- # digest=sha256 00:20:13.826 17:24:22 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:13.826 17:24:22 -- host/auth.sh@44 -- # keyid=3 00:20:13.826 17:24:22 -- host/auth.sh@45 -- # key=DHHC-1:02:OTRkNmZkNmY5MjU5YzlhMjY4N2NjZTRmNzViYzU2MTZjODI0MDZiNmJhZDVlZGM2i4WWqA==: 00:20:13.826 17:24:22 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:13.826 17:24:22 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:13.826 17:24:22 -- host/auth.sh@49 -- # echo DHHC-1:02:OTRkNmZkNmY5MjU5YzlhMjY4N2NjZTRmNzViYzU2MTZjODI0MDZiNmJhZDVlZGM2i4WWqA==: 00:20:13.826 17:24:22 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:20:13.826 17:24:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:13.826 17:24:22 -- host/auth.sh@68 -- # digest=sha256 00:20:13.826 17:24:22 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:13.826 17:24:22 -- host/auth.sh@68 -- # keyid=3 00:20:13.826 17:24:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:13.826 17:24:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.826 17:24:22 -- common/autotest_common.sh@10 -- # set +x 00:20:13.826 17:24:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.826 
17:24:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:13.826 17:24:22 -- nvmf/common.sh@717 -- # local ip 00:20:13.826 17:24:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:13.826 17:24:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:13.826 17:24:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.826 17:24:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.826 17:24:22 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:13.826 17:24:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:13.826 17:24:22 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:13.826 17:24:22 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:13.826 17:24:22 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:13.826 17:24:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:13.826 17:24:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.826 17:24:22 -- common/autotest_common.sh@10 -- # set +x 00:20:13.826 nvme0n1 00:20:13.826 17:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.826 17:24:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.826 17:24:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:13.826 17:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.826 17:24:23 -- common/autotest_common.sh@10 -- # set +x 00:20:13.826 17:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.084 17:24:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.084 17:24:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.084 17:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.084 17:24:23 -- common/autotest_common.sh@10 -- # set +x 00:20:14.084 17:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.084 17:24:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:14.084 17:24:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:20:14.084 17:24:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:14.084 17:24:23 -- host/auth.sh@44 -- # digest=sha256 00:20:14.084 17:24:23 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:14.084 17:24:23 -- host/auth.sh@44 -- # keyid=4 00:20:14.084 17:24:23 -- host/auth.sh@45 -- # key=DHHC-1:03:ZmNhMmNlNGY2OWRmYjc5OTZmMzdhMWY4OTA1ZWQxYjVhOWM5ZjViZjAxOGExMjMyODc4NWEzODMxNWIyZDI0NhNL01Y=: 00:20:14.084 17:24:23 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:14.084 17:24:23 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:14.084 17:24:23 -- host/auth.sh@49 -- # echo DHHC-1:03:ZmNhMmNlNGY2OWRmYjc5OTZmMzdhMWY4OTA1ZWQxYjVhOWM5ZjViZjAxOGExMjMyODc4NWEzODMxNWIyZDI0NhNL01Y=: 00:20:14.084 17:24:23 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:20:14.084 17:24:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:14.084 17:24:23 -- host/auth.sh@68 -- # digest=sha256 00:20:14.084 17:24:23 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:14.084 17:24:23 -- host/auth.sh@68 -- # keyid=4 00:20:14.084 17:24:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:14.084 17:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.084 17:24:23 -- common/autotest_common.sh@10 -- # set +x 00:20:14.084 17:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.084 17:24:23 -- host/auth.sh@70 -- # get_main_ns_ip 
00:20:14.084 17:24:23 -- nvmf/common.sh@717 -- # local ip 00:20:14.084 17:24:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:14.084 17:24:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:14.084 17:24:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.084 17:24:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.084 17:24:23 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:14.084 17:24:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:14.084 17:24:23 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:14.084 17:24:23 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:14.084 17:24:23 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:14.084 17:24:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:14.084 17:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.084 17:24:23 -- common/autotest_common.sh@10 -- # set +x 00:20:14.342 nvme0n1 00:20:14.342 17:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.342 17:24:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.342 17:24:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:14.342 17:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.342 17:24:23 -- common/autotest_common.sh@10 -- # set +x 00:20:14.342 17:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.342 17:24:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.342 17:24:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.342 17:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.342 17:24:23 -- common/autotest_common.sh@10 -- # set +x 00:20:14.342 17:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.342 17:24:23 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:14.342 17:24:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:14.342 17:24:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:20:14.342 17:24:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:14.342 17:24:23 -- host/auth.sh@44 -- # digest=sha256 00:20:14.342 17:24:23 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:14.342 17:24:23 -- host/auth.sh@44 -- # keyid=0 00:20:14.342 17:24:23 -- host/auth.sh@45 -- # key=DHHC-1:00:M2EzYzU5NDY1Y2VhZGY4OGI3OTNjYWJiMmM5MjcwMjF5ns9H: 00:20:14.342 17:24:23 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:14.342 17:24:23 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:14.342 17:24:23 -- host/auth.sh@49 -- # echo DHHC-1:00:M2EzYzU5NDY1Y2VhZGY4OGI3OTNjYWJiMmM5MjcwMjF5ns9H: 00:20:14.342 17:24:23 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:20:14.342 17:24:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:14.342 17:24:23 -- host/auth.sh@68 -- # digest=sha256 00:20:14.342 17:24:23 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:14.342 17:24:23 -- host/auth.sh@68 -- # keyid=0 00:20:14.342 17:24:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:14.342 17:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.342 17:24:23 -- common/autotest_common.sh@10 -- # set +x 00:20:14.342 17:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.342 17:24:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:14.342 17:24:23 -- nvmf/common.sh@717 -- # local ip 
00:20:14.342 17:24:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:14.342 17:24:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:14.342 17:24:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.342 17:24:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.342 17:24:23 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:14.342 17:24:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:14.342 17:24:23 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:14.342 17:24:23 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:14.342 17:24:23 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:14.342 17:24:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:14.342 17:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.342 17:24:23 -- common/autotest_common.sh@10 -- # set +x 00:20:14.600 nvme0n1 00:20:14.600 17:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.600 17:24:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.600 17:24:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:14.600 17:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.600 17:24:23 -- common/autotest_common.sh@10 -- # set +x 00:20:14.600 17:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.600 17:24:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.600 17:24:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.600 17:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.600 17:24:23 -- common/autotest_common.sh@10 -- # set +x 00:20:14.600 17:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.600 17:24:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:14.600 17:24:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:20:14.600 17:24:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:14.600 17:24:23 -- host/auth.sh@44 -- # digest=sha256 00:20:14.600 17:24:23 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:14.600 17:24:23 -- host/auth.sh@44 -- # keyid=1 00:20:14.600 17:24:23 -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:14.600 17:24:23 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:14.600 17:24:23 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:14.600 17:24:23 -- host/auth.sh@49 -- # echo DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:14.600 17:24:23 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:20:14.600 17:24:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:14.600 17:24:23 -- host/auth.sh@68 -- # digest=sha256 00:20:14.600 17:24:23 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:14.600 17:24:23 -- host/auth.sh@68 -- # keyid=1 00:20:14.600 17:24:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:14.600 17:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.600 17:24:23 -- common/autotest_common.sh@10 -- # set +x 00:20:14.600 17:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.600 17:24:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:14.600 17:24:23 -- nvmf/common.sh@717 -- # local ip 00:20:14.600 17:24:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:14.600 17:24:23 -- 
nvmf/common.sh@718 -- # local -A ip_candidates 00:20:14.600 17:24:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.600 17:24:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.600 17:24:23 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:14.600 17:24:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:14.600 17:24:23 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:14.600 17:24:23 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:14.600 17:24:23 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:14.600 17:24:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:14.600 17:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.600 17:24:23 -- common/autotest_common.sh@10 -- # set +x 00:20:14.857 nvme0n1 00:20:14.857 17:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.857 17:24:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.857 17:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.857 17:24:23 -- common/autotest_common.sh@10 -- # set +x 00:20:14.857 17:24:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:14.857 17:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.857 17:24:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.857 17:24:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.857 17:24:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.857 17:24:24 -- common/autotest_common.sh@10 -- # set +x 00:20:14.857 17:24:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.857 17:24:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:14.857 17:24:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:20:14.857 17:24:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:14.857 17:24:24 -- host/auth.sh@44 -- # digest=sha256 00:20:14.857 17:24:24 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:14.857 17:24:24 -- host/auth.sh@44 -- # keyid=2 00:20:14.857 17:24:24 -- host/auth.sh@45 -- # key=DHHC-1:01:ODE2ZTQ1N2Y2ZDFiZmFiNGE3MjUwMGM4MGEwZDI2NTVrVT6Y: 00:20:14.857 17:24:24 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:14.857 17:24:24 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:14.857 17:24:24 -- host/auth.sh@49 -- # echo DHHC-1:01:ODE2ZTQ1N2Y2ZDFiZmFiNGE3MjUwMGM4MGEwZDI2NTVrVT6Y: 00:20:14.857 17:24:24 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:20:14.857 17:24:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:14.857 17:24:24 -- host/auth.sh@68 -- # digest=sha256 00:20:14.857 17:24:24 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:14.857 17:24:24 -- host/auth.sh@68 -- # keyid=2 00:20:14.857 17:24:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:14.857 17:24:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.857 17:24:24 -- common/autotest_common.sh@10 -- # set +x 00:20:14.857 17:24:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.857 17:24:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:14.857 17:24:24 -- nvmf/common.sh@717 -- # local ip 00:20:14.858 17:24:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:14.858 17:24:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:14.858 17:24:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:20:14.858 17:24:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.858 17:24:24 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:14.858 17:24:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:14.858 17:24:24 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:14.858 17:24:24 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:14.858 17:24:24 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:14.858 17:24:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:14.858 17:24:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.858 17:24:24 -- common/autotest_common.sh@10 -- # set +x 00:20:15.115 nvme0n1 00:20:15.115 17:24:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.115 17:24:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:15.115 17:24:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.115 17:24:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.115 17:24:24 -- common/autotest_common.sh@10 -- # set +x 00:20:15.115 17:24:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.115 17:24:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.115 17:24:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.115 17:24:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.115 17:24:24 -- common/autotest_common.sh@10 -- # set +x 00:20:15.115 17:24:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.115 17:24:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:15.115 17:24:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:20:15.115 17:24:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:15.116 17:24:24 -- host/auth.sh@44 -- # digest=sha256 00:20:15.116 17:24:24 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:15.116 17:24:24 -- host/auth.sh@44 -- # keyid=3 00:20:15.116 17:24:24 -- host/auth.sh@45 -- # key=DHHC-1:02:OTRkNmZkNmY5MjU5YzlhMjY4N2NjZTRmNzViYzU2MTZjODI0MDZiNmJhZDVlZGM2i4WWqA==: 00:20:15.116 17:24:24 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:15.116 17:24:24 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:15.116 17:24:24 -- host/auth.sh@49 -- # echo DHHC-1:02:OTRkNmZkNmY5MjU5YzlhMjY4N2NjZTRmNzViYzU2MTZjODI0MDZiNmJhZDVlZGM2i4WWqA==: 00:20:15.116 17:24:24 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:20:15.116 17:24:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:15.116 17:24:24 -- host/auth.sh@68 -- # digest=sha256 00:20:15.116 17:24:24 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:15.116 17:24:24 -- host/auth.sh@68 -- # keyid=3 00:20:15.116 17:24:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:15.116 17:24:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.116 17:24:24 -- common/autotest_common.sh@10 -- # set +x 00:20:15.116 17:24:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.116 17:24:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:15.116 17:24:24 -- nvmf/common.sh@717 -- # local ip 00:20:15.116 17:24:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:15.116 17:24:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:15.116 17:24:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.116 17:24:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:20:15.116 17:24:24 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:15.116 17:24:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:15.116 17:24:24 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:15.116 17:24:24 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:15.116 17:24:24 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:15.116 17:24:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:15.116 17:24:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.116 17:24:24 -- common/autotest_common.sh@10 -- # set +x 00:20:15.373 nvme0n1 00:20:15.373 17:24:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.373 17:24:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.373 17:24:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:15.373 17:24:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.373 17:24:24 -- common/autotest_common.sh@10 -- # set +x 00:20:15.373 17:24:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.630 17:24:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.630 17:24:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.630 17:24:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.630 17:24:24 -- common/autotest_common.sh@10 -- # set +x 00:20:15.630 17:24:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.630 17:24:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:15.630 17:24:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:20:15.630 17:24:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:15.630 17:24:24 -- host/auth.sh@44 -- # digest=sha256 00:20:15.630 17:24:24 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:15.630 17:24:24 -- host/auth.sh@44 -- # keyid=4 00:20:15.630 17:24:24 -- host/auth.sh@45 -- # key=DHHC-1:03:ZmNhMmNlNGY2OWRmYjc5OTZmMzdhMWY4OTA1ZWQxYjVhOWM5ZjViZjAxOGExMjMyODc4NWEzODMxNWIyZDI0NhNL01Y=: 00:20:15.630 17:24:24 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:15.630 17:24:24 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:15.630 17:24:24 -- host/auth.sh@49 -- # echo DHHC-1:03:ZmNhMmNlNGY2OWRmYjc5OTZmMzdhMWY4OTA1ZWQxYjVhOWM5ZjViZjAxOGExMjMyODc4NWEzODMxNWIyZDI0NhNL01Y=: 00:20:15.630 17:24:24 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:20:15.630 17:24:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:15.630 17:24:24 -- host/auth.sh@68 -- # digest=sha256 00:20:15.630 17:24:24 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:15.630 17:24:24 -- host/auth.sh@68 -- # keyid=4 00:20:15.630 17:24:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:15.630 17:24:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.630 17:24:24 -- common/autotest_common.sh@10 -- # set +x 00:20:15.630 17:24:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.631 17:24:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:15.631 17:24:24 -- nvmf/common.sh@717 -- # local ip 00:20:15.631 17:24:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:15.631 17:24:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:15.631 17:24:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.631 17:24:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.631 17:24:24 -- nvmf/common.sh@723 -- # [[ -z 
rdma ]] 00:20:15.631 17:24:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:15.631 17:24:24 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:15.631 17:24:24 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:15.631 17:24:24 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:15.631 17:24:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:15.631 17:24:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.631 17:24:24 -- common/autotest_common.sh@10 -- # set +x 00:20:15.888 nvme0n1 00:20:15.888 17:24:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.888 17:24:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.888 17:24:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:15.888 17:24:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.888 17:24:24 -- common/autotest_common.sh@10 -- # set +x 00:20:15.888 17:24:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.888 17:24:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.888 17:24:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.888 17:24:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.888 17:24:24 -- common/autotest_common.sh@10 -- # set +x 00:20:15.888 17:24:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.888 17:24:24 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:15.888 17:24:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:15.888 17:24:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:20:15.888 17:24:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:15.888 17:24:24 -- host/auth.sh@44 -- # digest=sha256 00:20:15.888 17:24:24 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:15.888 17:24:24 -- host/auth.sh@44 -- # keyid=0 00:20:15.888 17:24:24 -- host/auth.sh@45 -- # key=DHHC-1:00:M2EzYzU5NDY1Y2VhZGY4OGI3OTNjYWJiMmM5MjcwMjF5ns9H: 00:20:15.888 17:24:24 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:15.888 17:24:24 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:15.888 17:24:24 -- host/auth.sh@49 -- # echo DHHC-1:00:M2EzYzU5NDY1Y2VhZGY4OGI3OTNjYWJiMmM5MjcwMjF5ns9H: 00:20:15.888 17:24:24 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:20:15.888 17:24:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:15.888 17:24:24 -- host/auth.sh@68 -- # digest=sha256 00:20:15.888 17:24:24 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:15.888 17:24:24 -- host/auth.sh@68 -- # keyid=0 00:20:15.888 17:24:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:15.888 17:24:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.888 17:24:24 -- common/autotest_common.sh@10 -- # set +x 00:20:15.888 17:24:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.888 17:24:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:15.888 17:24:24 -- nvmf/common.sh@717 -- # local ip 00:20:15.888 17:24:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:15.888 17:24:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:15.888 17:24:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.888 17:24:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.888 17:24:24 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:15.888 17:24:24 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:20:15.888 17:24:24 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:15.888 17:24:24 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:15.888 17:24:24 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:15.888 17:24:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:15.888 17:24:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.888 17:24:24 -- common/autotest_common.sh@10 -- # set +x 00:20:16.145 nvme0n1 00:20:16.145 17:24:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.145 17:24:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.145 17:24:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.145 17:24:25 -- common/autotest_common.sh@10 -- # set +x 00:20:16.145 17:24:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:16.145 17:24:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.146 17:24:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.146 17:24:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:16.146 17:24:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.146 17:24:25 -- common/autotest_common.sh@10 -- # set +x 00:20:16.146 17:24:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.146 17:24:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:16.146 17:24:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:20:16.146 17:24:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:16.146 17:24:25 -- host/auth.sh@44 -- # digest=sha256 00:20:16.146 17:24:25 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:16.146 17:24:25 -- host/auth.sh@44 -- # keyid=1 00:20:16.146 17:24:25 -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:16.146 17:24:25 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:16.146 17:24:25 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:16.146 17:24:25 -- host/auth.sh@49 -- # echo DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:16.146 17:24:25 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:20:16.146 17:24:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:16.146 17:24:25 -- host/auth.sh@68 -- # digest=sha256 00:20:16.146 17:24:25 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:16.146 17:24:25 -- host/auth.sh@68 -- # keyid=1 00:20:16.146 17:24:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:16.146 17:24:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.146 17:24:25 -- common/autotest_common.sh@10 -- # set +x 00:20:16.146 17:24:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.146 17:24:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:16.146 17:24:25 -- nvmf/common.sh@717 -- # local ip 00:20:16.146 17:24:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:16.146 17:24:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:16.146 17:24:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.146 17:24:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.146 17:24:25 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:16.146 17:24:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:16.146 17:24:25 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 
00:20:16.146 17:24:25 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:16.146 17:24:25 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:16.146 17:24:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:16.146 17:24:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.146 17:24:25 -- common/autotest_common.sh@10 -- # set +x 00:20:16.711 nvme0n1 00:20:16.711 17:24:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.711 17:24:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.711 17:24:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:16.711 17:24:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.711 17:24:25 -- common/autotest_common.sh@10 -- # set +x 00:20:16.711 17:24:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.711 17:24:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.711 17:24:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:16.711 17:24:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.711 17:24:25 -- common/autotest_common.sh@10 -- # set +x 00:20:16.711 17:24:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.711 17:24:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:16.711 17:24:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:20:16.711 17:24:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:16.711 17:24:25 -- host/auth.sh@44 -- # digest=sha256 00:20:16.711 17:24:25 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:16.711 17:24:25 -- host/auth.sh@44 -- # keyid=2 00:20:16.711 17:24:25 -- host/auth.sh@45 -- # key=DHHC-1:01:ODE2ZTQ1N2Y2ZDFiZmFiNGE3MjUwMGM4MGEwZDI2NTVrVT6Y: 00:20:16.711 17:24:25 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:16.711 17:24:25 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:16.711 17:24:25 -- host/auth.sh@49 -- # echo DHHC-1:01:ODE2ZTQ1N2Y2ZDFiZmFiNGE3MjUwMGM4MGEwZDI2NTVrVT6Y: 00:20:16.711 17:24:25 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:20:16.711 17:24:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:16.711 17:24:25 -- host/auth.sh@68 -- # digest=sha256 00:20:16.711 17:24:25 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:16.711 17:24:25 -- host/auth.sh@68 -- # keyid=2 00:20:16.711 17:24:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:16.711 17:24:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.711 17:24:25 -- common/autotest_common.sh@10 -- # set +x 00:20:16.711 17:24:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.711 17:24:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:16.711 17:24:25 -- nvmf/common.sh@717 -- # local ip 00:20:16.711 17:24:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:16.711 17:24:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:16.711 17:24:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.711 17:24:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.711 17:24:25 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:16.711 17:24:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:16.711 17:24:25 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:16.711 17:24:25 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:16.711 17:24:25 -- nvmf/common.sh@731 -- # echo 192.168.100.8 
00:20:16.711 17:24:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:16.711 17:24:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.711 17:24:25 -- common/autotest_common.sh@10 -- # set +x 00:20:16.970 nvme0n1 00:20:16.970 17:24:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.970 17:24:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.970 17:24:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:16.970 17:24:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.970 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:20:16.970 17:24:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.970 17:24:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.970 17:24:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:16.970 17:24:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.970 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:20:16.970 17:24:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.970 17:24:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:16.970 17:24:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:20:16.970 17:24:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:16.970 17:24:26 -- host/auth.sh@44 -- # digest=sha256 00:20:16.970 17:24:26 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:16.970 17:24:26 -- host/auth.sh@44 -- # keyid=3 00:20:16.970 17:24:26 -- host/auth.sh@45 -- # key=DHHC-1:02:OTRkNmZkNmY5MjU5YzlhMjY4N2NjZTRmNzViYzU2MTZjODI0MDZiNmJhZDVlZGM2i4WWqA==: 00:20:16.970 17:24:26 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:16.970 17:24:26 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:16.970 17:24:26 -- host/auth.sh@49 -- # echo DHHC-1:02:OTRkNmZkNmY5MjU5YzlhMjY4N2NjZTRmNzViYzU2MTZjODI0MDZiNmJhZDVlZGM2i4WWqA==: 00:20:16.970 17:24:26 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:20:16.970 17:24:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:16.970 17:24:26 -- host/auth.sh@68 -- # digest=sha256 00:20:16.970 17:24:26 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:16.970 17:24:26 -- host/auth.sh@68 -- # keyid=3 00:20:16.970 17:24:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:16.970 17:24:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.970 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:20:16.970 17:24:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.970 17:24:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:16.970 17:24:26 -- nvmf/common.sh@717 -- # local ip 00:20:16.970 17:24:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:16.970 17:24:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:16.970 17:24:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.970 17:24:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.970 17:24:26 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:16.970 17:24:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:16.970 17:24:26 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:16.970 17:24:26 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:16.970 17:24:26 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:16.970 17:24:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:16.970 17:24:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.970 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:20:17.228 nvme0n1 00:20:17.228 17:24:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.228 17:24:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.228 17:24:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:17.228 17:24:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.228 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:20:17.228 17:24:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.486 17:24:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.486 17:24:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.486 17:24:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.486 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:20:17.486 17:24:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.486 17:24:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:17.486 17:24:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:20:17.486 17:24:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:17.486 17:24:26 -- host/auth.sh@44 -- # digest=sha256 00:20:17.486 17:24:26 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:17.486 17:24:26 -- host/auth.sh@44 -- # keyid=4 00:20:17.486 17:24:26 -- host/auth.sh@45 -- # key=DHHC-1:03:ZmNhMmNlNGY2OWRmYjc5OTZmMzdhMWY4OTA1ZWQxYjVhOWM5ZjViZjAxOGExMjMyODc4NWEzODMxNWIyZDI0NhNL01Y=: 00:20:17.486 17:24:26 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:17.486 17:24:26 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:17.486 17:24:26 -- host/auth.sh@49 -- # echo DHHC-1:03:ZmNhMmNlNGY2OWRmYjc5OTZmMzdhMWY4OTA1ZWQxYjVhOWM5ZjViZjAxOGExMjMyODc4NWEzODMxNWIyZDI0NhNL01Y=: 00:20:17.486 17:24:26 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:20:17.486 17:24:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:17.486 17:24:26 -- host/auth.sh@68 -- # digest=sha256 00:20:17.486 17:24:26 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:17.486 17:24:26 -- host/auth.sh@68 -- # keyid=4 00:20:17.486 17:24:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:17.486 17:24:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.486 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:20:17.486 17:24:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.486 17:24:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:17.486 17:24:26 -- nvmf/common.sh@717 -- # local ip 00:20:17.486 17:24:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:17.486 17:24:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:17.486 17:24:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.486 17:24:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.486 17:24:26 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:17.486 17:24:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:17.486 17:24:26 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:17.486 17:24:26 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:17.486 17:24:26 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:17.486 17:24:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:17.486 17:24:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.486 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:20:17.744 nvme0n1 00:20:17.744 17:24:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.744 17:24:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.744 17:24:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:17.744 17:24:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.744 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:20:17.744 17:24:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.744 17:24:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.744 17:24:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.744 17:24:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.744 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:20:17.744 17:24:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.744 17:24:26 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:17.744 17:24:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:17.744 17:24:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:20:17.744 17:24:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:17.744 17:24:26 -- host/auth.sh@44 -- # digest=sha256 00:20:17.744 17:24:26 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:17.744 17:24:26 -- host/auth.sh@44 -- # keyid=0 00:20:17.744 17:24:26 -- host/auth.sh@45 -- # key=DHHC-1:00:M2EzYzU5NDY1Y2VhZGY4OGI3OTNjYWJiMmM5MjcwMjF5ns9H: 00:20:17.744 17:24:26 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:17.744 17:24:26 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:17.744 17:24:26 -- host/auth.sh@49 -- # echo DHHC-1:00:M2EzYzU5NDY1Y2VhZGY4OGI3OTNjYWJiMmM5MjcwMjF5ns9H: 00:20:17.744 17:24:26 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:20:17.744 17:24:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:17.744 17:24:26 -- host/auth.sh@68 -- # digest=sha256 00:20:17.744 17:24:26 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:17.744 17:24:26 -- host/auth.sh@68 -- # keyid=0 00:20:17.744 17:24:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:17.744 17:24:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.744 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:20:17.744 17:24:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.744 17:24:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:17.744 17:24:26 -- nvmf/common.sh@717 -- # local ip 00:20:17.744 17:24:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:17.744 17:24:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:17.744 17:24:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.744 17:24:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.744 17:24:26 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:17.744 17:24:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:17.744 17:24:26 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:17.744 17:24:26 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:17.744 17:24:26 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:17.744 17:24:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 00:20:17.744 17:24:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.744 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:20:18.310 nvme0n1 00:20:18.310 17:24:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.310 17:24:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.310 17:24:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:18.310 17:24:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:18.310 17:24:27 -- common/autotest_common.sh@10 -- # set +x 00:20:18.310 17:24:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.310 17:24:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.310 17:24:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.310 17:24:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:18.310 17:24:27 -- common/autotest_common.sh@10 -- # set +x 00:20:18.310 17:24:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.310 17:24:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:18.310 17:24:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:20:18.310 17:24:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:18.310 17:24:27 -- host/auth.sh@44 -- # digest=sha256 00:20:18.310 17:24:27 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:18.310 17:24:27 -- host/auth.sh@44 -- # keyid=1 00:20:18.310 17:24:27 -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:18.310 17:24:27 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:18.310 17:24:27 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:18.310 17:24:27 -- host/auth.sh@49 -- # echo DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:18.310 17:24:27 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:20:18.310 17:24:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:18.310 17:24:27 -- host/auth.sh@68 -- # digest=sha256 00:20:18.310 17:24:27 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:18.310 17:24:27 -- host/auth.sh@68 -- # keyid=1 00:20:18.310 17:24:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:18.310 17:24:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:18.310 17:24:27 -- common/autotest_common.sh@10 -- # set +x 00:20:18.310 17:24:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.310 17:24:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:18.310 17:24:27 -- nvmf/common.sh@717 -- # local ip 00:20:18.310 17:24:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:18.310 17:24:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:18.310 17:24:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.310 17:24:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.310 17:24:27 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:18.310 17:24:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:18.310 17:24:27 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:18.310 17:24:27 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:18.310 17:24:27 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:18.310 17:24:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:18.310 17:24:27 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:20:18.310 17:24:27 -- common/autotest_common.sh@10 -- # set +x 00:20:18.876 nvme0n1 00:20:18.876 17:24:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.876 17:24:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.876 17:24:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:18.876 17:24:27 -- common/autotest_common.sh@10 -- # set +x 00:20:18.876 17:24:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:18.876 17:24:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.876 17:24:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.876 17:24:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.876 17:24:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:18.876 17:24:27 -- common/autotest_common.sh@10 -- # set +x 00:20:18.876 17:24:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.876 17:24:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:18.876 17:24:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:20:18.876 17:24:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:18.876 17:24:27 -- host/auth.sh@44 -- # digest=sha256 00:20:18.876 17:24:27 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:18.876 17:24:27 -- host/auth.sh@44 -- # keyid=2 00:20:18.876 17:24:27 -- host/auth.sh@45 -- # key=DHHC-1:01:ODE2ZTQ1N2Y2ZDFiZmFiNGE3MjUwMGM4MGEwZDI2NTVrVT6Y: 00:20:18.876 17:24:27 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:18.876 17:24:27 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:18.876 17:24:27 -- host/auth.sh@49 -- # echo DHHC-1:01:ODE2ZTQ1N2Y2ZDFiZmFiNGE3MjUwMGM4MGEwZDI2NTVrVT6Y: 00:20:18.876 17:24:27 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:20:18.876 17:24:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:18.876 17:24:27 -- host/auth.sh@68 -- # digest=sha256 00:20:18.876 17:24:27 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:18.876 17:24:27 -- host/auth.sh@68 -- # keyid=2 00:20:18.876 17:24:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:18.876 17:24:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:18.876 17:24:27 -- common/autotest_common.sh@10 -- # set +x 00:20:18.876 17:24:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.876 17:24:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:18.876 17:24:27 -- nvmf/common.sh@717 -- # local ip 00:20:18.876 17:24:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:18.876 17:24:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:18.876 17:24:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.876 17:24:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.876 17:24:27 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:18.876 17:24:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:18.876 17:24:27 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:18.876 17:24:27 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:18.876 17:24:27 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:18.876 17:24:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:18.876 17:24:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:18.876 17:24:27 -- common/autotest_common.sh@10 -- # set +x 00:20:19.134 nvme0n1 00:20:19.134 17:24:28 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:20:19.134 17:24:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:19.134 17:24:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:19.134 17:24:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.134 17:24:28 -- common/autotest_common.sh@10 -- # set +x 00:20:19.134 17:24:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.392 17:24:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.392 17:24:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:19.392 17:24:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.392 17:24:28 -- common/autotest_common.sh@10 -- # set +x 00:20:19.392 17:24:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.392 17:24:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:19.392 17:24:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:20:19.392 17:24:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:19.392 17:24:28 -- host/auth.sh@44 -- # digest=sha256 00:20:19.392 17:24:28 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:19.392 17:24:28 -- host/auth.sh@44 -- # keyid=3 00:20:19.392 17:24:28 -- host/auth.sh@45 -- # key=DHHC-1:02:OTRkNmZkNmY5MjU5YzlhMjY4N2NjZTRmNzViYzU2MTZjODI0MDZiNmJhZDVlZGM2i4WWqA==: 00:20:19.392 17:24:28 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:19.392 17:24:28 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:19.392 17:24:28 -- host/auth.sh@49 -- # echo DHHC-1:02:OTRkNmZkNmY5MjU5YzlhMjY4N2NjZTRmNzViYzU2MTZjODI0MDZiNmJhZDVlZGM2i4WWqA==: 00:20:19.392 17:24:28 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:20:19.392 17:24:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:19.392 17:24:28 -- host/auth.sh@68 -- # digest=sha256 00:20:19.392 17:24:28 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:19.392 17:24:28 -- host/auth.sh@68 -- # keyid=3 00:20:19.392 17:24:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:19.392 17:24:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.392 17:24:28 -- common/autotest_common.sh@10 -- # set +x 00:20:19.392 17:24:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.392 17:24:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:19.392 17:24:28 -- nvmf/common.sh@717 -- # local ip 00:20:19.392 17:24:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:19.392 17:24:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:19.392 17:24:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:19.392 17:24:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:19.392 17:24:28 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:19.392 17:24:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:19.392 17:24:28 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:19.392 17:24:28 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:19.392 17:24:28 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:19.392 17:24:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:19.392 17:24:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.392 17:24:28 -- common/autotest_common.sh@10 -- # set +x 00:20:19.650 nvme0n1 00:20:19.650 17:24:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.650 17:24:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 
00:20:19.650 17:24:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:19.650 17:24:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.650 17:24:28 -- common/autotest_common.sh@10 -- # set +x 00:20:19.908 17:24:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.908 17:24:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.908 17:24:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:19.908 17:24:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.908 17:24:28 -- common/autotest_common.sh@10 -- # set +x 00:20:19.908 17:24:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.908 17:24:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:19.908 17:24:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:20:19.908 17:24:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:19.908 17:24:28 -- host/auth.sh@44 -- # digest=sha256 00:20:19.908 17:24:28 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:19.908 17:24:28 -- host/auth.sh@44 -- # keyid=4 00:20:19.908 17:24:28 -- host/auth.sh@45 -- # key=DHHC-1:03:ZmNhMmNlNGY2OWRmYjc5OTZmMzdhMWY4OTA1ZWQxYjVhOWM5ZjViZjAxOGExMjMyODc4NWEzODMxNWIyZDI0NhNL01Y=: 00:20:19.908 17:24:28 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:19.908 17:24:28 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:19.908 17:24:28 -- host/auth.sh@49 -- # echo DHHC-1:03:ZmNhMmNlNGY2OWRmYjc5OTZmMzdhMWY4OTA1ZWQxYjVhOWM5ZjViZjAxOGExMjMyODc4NWEzODMxNWIyZDI0NhNL01Y=: 00:20:19.908 17:24:28 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:20:19.908 17:24:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:19.908 17:24:28 -- host/auth.sh@68 -- # digest=sha256 00:20:19.908 17:24:28 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:19.908 17:24:28 -- host/auth.sh@68 -- # keyid=4 00:20:19.908 17:24:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:19.908 17:24:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.908 17:24:28 -- common/autotest_common.sh@10 -- # set +x 00:20:19.908 17:24:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.908 17:24:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:19.908 17:24:28 -- nvmf/common.sh@717 -- # local ip 00:20:19.908 17:24:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:19.908 17:24:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:19.908 17:24:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:19.908 17:24:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:19.908 17:24:28 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:19.908 17:24:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:19.908 17:24:28 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:19.908 17:24:28 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:19.908 17:24:28 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:19.908 17:24:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:19.908 17:24:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.908 17:24:28 -- common/autotest_common.sh@10 -- # set +x 00:20:20.229 nvme0n1 00:20:20.229 17:24:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.229 17:24:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.229 17:24:29 -- common/autotest_common.sh@549 -- 
# xtrace_disable 00:20:20.229 17:24:29 -- common/autotest_common.sh@10 -- # set +x 00:20:20.229 17:24:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:20.229 17:24:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.229 17:24:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.229 17:24:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.229 17:24:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.229 17:24:29 -- common/autotest_common.sh@10 -- # set +x 00:20:20.229 17:24:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.229 17:24:29 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.229 17:24:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:20.229 17:24:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:20:20.229 17:24:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:20.230 17:24:29 -- host/auth.sh@44 -- # digest=sha256 00:20:20.230 17:24:29 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:20.230 17:24:29 -- host/auth.sh@44 -- # keyid=0 00:20:20.230 17:24:29 -- host/auth.sh@45 -- # key=DHHC-1:00:M2EzYzU5NDY1Y2VhZGY4OGI3OTNjYWJiMmM5MjcwMjF5ns9H: 00:20:20.230 17:24:29 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:20.230 17:24:29 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:20.230 17:24:29 -- host/auth.sh@49 -- # echo DHHC-1:00:M2EzYzU5NDY1Y2VhZGY4OGI3OTNjYWJiMmM5MjcwMjF5ns9H: 00:20:20.230 17:24:29 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:20:20.230 17:24:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:20.230 17:24:29 -- host/auth.sh@68 -- # digest=sha256 00:20:20.230 17:24:29 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:20.230 17:24:29 -- host/auth.sh@68 -- # keyid=0 00:20:20.230 17:24:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:20.230 17:24:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.230 17:24:29 -- common/autotest_common.sh@10 -- # set +x 00:20:20.230 17:24:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.230 17:24:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:20.230 17:24:29 -- nvmf/common.sh@717 -- # local ip 00:20:20.230 17:24:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:20.230 17:24:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:20.230 17:24:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.230 17:24:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.230 17:24:29 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:20.230 17:24:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:20.230 17:24:29 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:20.230 17:24:29 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:20.230 17:24:29 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:20.230 17:24:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:20.230 17:24:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.230 17:24:29 -- common/autotest_common.sh@10 -- # set +x 00:20:20.890 nvme0n1 00:20:20.890 17:24:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.890 17:24:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:20.890 17:24:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.891 17:24:30 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:20:20.891 17:24:30 -- common/autotest_common.sh@10 -- # set +x 00:20:20.891 17:24:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.891 17:24:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.891 17:24:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.891 17:24:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.891 17:24:30 -- common/autotest_common.sh@10 -- # set +x 00:20:20.891 17:24:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.891 17:24:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:20.891 17:24:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:20:20.891 17:24:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:20.891 17:24:30 -- host/auth.sh@44 -- # digest=sha256 00:20:20.891 17:24:30 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:20.891 17:24:30 -- host/auth.sh@44 -- # keyid=1 00:20:20.891 17:24:30 -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:20.891 17:24:30 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:20.891 17:24:30 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:20.891 17:24:30 -- host/auth.sh@49 -- # echo DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:20.891 17:24:30 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:20:20.891 17:24:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:20.891 17:24:30 -- host/auth.sh@68 -- # digest=sha256 00:20:20.891 17:24:30 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:20.891 17:24:30 -- host/auth.sh@68 -- # keyid=1 00:20:20.891 17:24:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:20.891 17:24:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.891 17:24:30 -- common/autotest_common.sh@10 -- # set +x 00:20:20.891 17:24:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.891 17:24:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:20.891 17:24:30 -- nvmf/common.sh@717 -- # local ip 00:20:20.891 17:24:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:20.891 17:24:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:20.891 17:24:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.891 17:24:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.891 17:24:30 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:20.891 17:24:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:20.891 17:24:30 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:20.891 17:24:30 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:20.891 17:24:30 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:20.891 17:24:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:20.891 17:24:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.891 17:24:30 -- common/autotest_common.sh@10 -- # set +x 00:20:21.824 nvme0n1 00:20:21.824 17:24:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.824 17:24:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:21.824 17:24:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:21.824 17:24:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.824 17:24:30 -- common/autotest_common.sh@10 -- # set +x 00:20:21.824 
17:24:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.824 17:24:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.824 17:24:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:21.824 17:24:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.824 17:24:30 -- common/autotest_common.sh@10 -- # set +x 00:20:21.824 17:24:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.824 17:24:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:21.824 17:24:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:20:21.824 17:24:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:21.824 17:24:30 -- host/auth.sh@44 -- # digest=sha256 00:20:21.824 17:24:30 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:21.824 17:24:30 -- host/auth.sh@44 -- # keyid=2 00:20:21.825 17:24:30 -- host/auth.sh@45 -- # key=DHHC-1:01:ODE2ZTQ1N2Y2ZDFiZmFiNGE3MjUwMGM4MGEwZDI2NTVrVT6Y: 00:20:21.825 17:24:30 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:21.825 17:24:30 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:21.825 17:24:30 -- host/auth.sh@49 -- # echo DHHC-1:01:ODE2ZTQ1N2Y2ZDFiZmFiNGE3MjUwMGM4MGEwZDI2NTVrVT6Y: 00:20:21.825 17:24:30 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:20:21.825 17:24:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:21.825 17:24:30 -- host/auth.sh@68 -- # digest=sha256 00:20:21.825 17:24:30 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:21.825 17:24:30 -- host/auth.sh@68 -- # keyid=2 00:20:21.825 17:24:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:21.825 17:24:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.825 17:24:30 -- common/autotest_common.sh@10 -- # set +x 00:20:21.825 17:24:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.825 17:24:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:21.825 17:24:30 -- nvmf/common.sh@717 -- # local ip 00:20:21.825 17:24:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:21.825 17:24:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:21.825 17:24:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:21.825 17:24:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:21.825 17:24:30 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:21.825 17:24:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:21.825 17:24:30 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:21.825 17:24:30 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:21.825 17:24:30 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:21.825 17:24:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:21.825 17:24:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.825 17:24:30 -- common/autotest_common.sh@10 -- # set +x 00:20:22.391 nvme0n1 00:20:22.391 17:24:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.391 17:24:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:22.391 17:24:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:22.391 17:24:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.391 17:24:31 -- common/autotest_common.sh@10 -- # set +x 00:20:22.391 17:24:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.391 17:24:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.391 
17:24:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:22.391 17:24:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.391 17:24:31 -- common/autotest_common.sh@10 -- # set +x 00:20:22.391 17:24:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.391 17:24:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:22.391 17:24:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:20:22.391 17:24:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:22.391 17:24:31 -- host/auth.sh@44 -- # digest=sha256 00:20:22.391 17:24:31 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:22.391 17:24:31 -- host/auth.sh@44 -- # keyid=3 00:20:22.391 17:24:31 -- host/auth.sh@45 -- # key=DHHC-1:02:OTRkNmZkNmY5MjU5YzlhMjY4N2NjZTRmNzViYzU2MTZjODI0MDZiNmJhZDVlZGM2i4WWqA==: 00:20:22.391 17:24:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:22.391 17:24:31 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:22.391 17:24:31 -- host/auth.sh@49 -- # echo DHHC-1:02:OTRkNmZkNmY5MjU5YzlhMjY4N2NjZTRmNzViYzU2MTZjODI0MDZiNmJhZDVlZGM2i4WWqA==: 00:20:22.391 17:24:31 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:20:22.392 17:24:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:22.392 17:24:31 -- host/auth.sh@68 -- # digest=sha256 00:20:22.392 17:24:31 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:22.392 17:24:31 -- host/auth.sh@68 -- # keyid=3 00:20:22.392 17:24:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:22.392 17:24:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.392 17:24:31 -- common/autotest_common.sh@10 -- # set +x 00:20:22.392 17:24:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.392 17:24:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:22.392 17:24:31 -- nvmf/common.sh@717 -- # local ip 00:20:22.392 17:24:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:22.392 17:24:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:22.392 17:24:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:22.392 17:24:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:22.392 17:24:31 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:22.392 17:24:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:22.392 17:24:31 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:22.392 17:24:31 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:22.392 17:24:31 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:22.392 17:24:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:22.392 17:24:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.392 17:24:31 -- common/autotest_common.sh@10 -- # set +x 00:20:22.957 nvme0n1 00:20:22.957 17:24:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.957 17:24:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:22.957 17:24:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:22.957 17:24:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.957 17:24:32 -- common/autotest_common.sh@10 -- # set +x 00:20:22.957 17:24:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.216 17:24:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.216 17:24:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:23.216 17:24:32 
-- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.216 17:24:32 -- common/autotest_common.sh@10 -- # set +x 00:20:23.216 17:24:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.216 17:24:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:23.216 17:24:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:20:23.216 17:24:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:23.216 17:24:32 -- host/auth.sh@44 -- # digest=sha256 00:20:23.216 17:24:32 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:23.216 17:24:32 -- host/auth.sh@44 -- # keyid=4 00:20:23.216 17:24:32 -- host/auth.sh@45 -- # key=DHHC-1:03:ZmNhMmNlNGY2OWRmYjc5OTZmMzdhMWY4OTA1ZWQxYjVhOWM5ZjViZjAxOGExMjMyODc4NWEzODMxNWIyZDI0NhNL01Y=: 00:20:23.216 17:24:32 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:23.216 17:24:32 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:23.216 17:24:32 -- host/auth.sh@49 -- # echo DHHC-1:03:ZmNhMmNlNGY2OWRmYjc5OTZmMzdhMWY4OTA1ZWQxYjVhOWM5ZjViZjAxOGExMjMyODc4NWEzODMxNWIyZDI0NhNL01Y=: 00:20:23.216 17:24:32 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:20:23.216 17:24:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:23.216 17:24:32 -- host/auth.sh@68 -- # digest=sha256 00:20:23.216 17:24:32 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:23.216 17:24:32 -- host/auth.sh@68 -- # keyid=4 00:20:23.216 17:24:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:23.216 17:24:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.216 17:24:32 -- common/autotest_common.sh@10 -- # set +x 00:20:23.216 17:24:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.216 17:24:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:23.216 17:24:32 -- nvmf/common.sh@717 -- # local ip 00:20:23.216 17:24:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:23.216 17:24:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:23.216 17:24:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:23.216 17:24:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:23.216 17:24:32 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:23.216 17:24:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:23.216 17:24:32 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:23.216 17:24:32 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:23.216 17:24:32 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:23.216 17:24:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:23.216 17:24:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.216 17:24:32 -- common/autotest_common.sh@10 -- # set +x 00:20:23.781 nvme0n1 00:20:23.781 17:24:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.781 17:24:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:23.781 17:24:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:23.781 17:24:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.781 17:24:32 -- common/autotest_common.sh@10 -- # set +x 00:20:23.781 17:24:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.781 17:24:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.781 17:24:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:23.781 17:24:32 -- common/autotest_common.sh@549 -- # xtrace_disable 
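The trace up to this point finishes the sha256 sweep: for each FFDHE group (ffdhe2048 through ffdhe8192) and each key id 0-4, the target-side secret is reprogrammed and a fresh authenticated connect is attempted; the entries that follow repeat the same pattern for sha384. A minimal sketch of the loop implied by the host/auth.sh@107-111 xtrace lines is given here; the array contents are reconstructed from this log only, and the real script may define additional digests or key values.

    # Sketch of the sweep implied by the host/auth.sh@107-111 xtrace above.
    digests=("sha256" "sha384")                                   # sha384 iterations begin just below
    dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
    declare -a keys                                               # one DHHC-1 secret per key id (values elided here)

    for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"        # program the target-side secret
          connect_authenticate "$digest" "$dhgroup" "$keyid"      # verify the host can attach with it
        done
      done
    done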
00:20:23.781 17:24:32 -- common/autotest_common.sh@10 -- # set +x 00:20:23.781 17:24:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.781 17:24:32 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:20:23.781 17:24:32 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:23.781 17:24:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:23.781 17:24:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:20:23.781 17:24:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:23.781 17:24:32 -- host/auth.sh@44 -- # digest=sha384 00:20:23.781 17:24:32 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:23.781 17:24:32 -- host/auth.sh@44 -- # keyid=0 00:20:23.781 17:24:32 -- host/auth.sh@45 -- # key=DHHC-1:00:M2EzYzU5NDY1Y2VhZGY4OGI3OTNjYWJiMmM5MjcwMjF5ns9H: 00:20:23.781 17:24:32 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:23.781 17:24:32 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:23.781 17:24:32 -- host/auth.sh@49 -- # echo DHHC-1:00:M2EzYzU5NDY1Y2VhZGY4OGI3OTNjYWJiMmM5MjcwMjF5ns9H: 00:20:23.781 17:24:32 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:20:23.781 17:24:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:23.781 17:24:32 -- host/auth.sh@68 -- # digest=sha384 00:20:23.781 17:24:32 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:23.781 17:24:32 -- host/auth.sh@68 -- # keyid=0 00:20:23.781 17:24:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:23.781 17:24:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.781 17:24:32 -- common/autotest_common.sh@10 -- # set +x 00:20:23.781 17:24:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.781 17:24:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:23.781 17:24:32 -- nvmf/common.sh@717 -- # local ip 00:20:23.781 17:24:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:23.781 17:24:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:23.781 17:24:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:23.781 17:24:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:23.781 17:24:32 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:23.781 17:24:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:23.781 17:24:32 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:23.781 17:24:32 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:23.781 17:24:32 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:23.781 17:24:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:23.781 17:24:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.781 17:24:32 -- common/autotest_common.sh@10 -- # set +x 00:20:24.038 nvme0n1 00:20:24.039 17:24:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.039 17:24:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.039 17:24:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:24.039 17:24:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.039 17:24:33 -- common/autotest_common.sh@10 -- # set +x 00:20:24.039 17:24:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.039 17:24:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.039 17:24:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.039 17:24:33 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:20:24.039 17:24:33 -- common/autotest_common.sh@10 -- # set +x 00:20:24.039 17:24:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.039 17:24:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:24.039 17:24:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:20:24.039 17:24:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:24.039 17:24:33 -- host/auth.sh@44 -- # digest=sha384 00:20:24.039 17:24:33 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:24.039 17:24:33 -- host/auth.sh@44 -- # keyid=1 00:20:24.039 17:24:33 -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:24.039 17:24:33 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:24.039 17:24:33 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:24.039 17:24:33 -- host/auth.sh@49 -- # echo DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:24.039 17:24:33 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:20:24.039 17:24:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:24.039 17:24:33 -- host/auth.sh@68 -- # digest=sha384 00:20:24.039 17:24:33 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:24.039 17:24:33 -- host/auth.sh@68 -- # keyid=1 00:20:24.039 17:24:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:24.039 17:24:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.039 17:24:33 -- common/autotest_common.sh@10 -- # set +x 00:20:24.039 17:24:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.039 17:24:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:24.039 17:24:33 -- nvmf/common.sh@717 -- # local ip 00:20:24.039 17:24:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:24.039 17:24:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:24.039 17:24:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.039 17:24:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.039 17:24:33 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:24.039 17:24:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:24.039 17:24:33 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:24.039 17:24:33 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:24.039 17:24:33 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:24.039 17:24:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:24.039 17:24:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.039 17:24:33 -- common/autotest_common.sh@10 -- # set +x 00:20:24.297 nvme0n1 00:20:24.297 17:24:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.297 17:24:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.297 17:24:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.297 17:24:33 -- common/autotest_common.sh@10 -- # set +x 00:20:24.297 17:24:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:24.297 17:24:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.297 17:24:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.297 17:24:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.297 17:24:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.297 17:24:33 -- common/autotest_common.sh@10 -- # set +x 00:20:24.297 
17:24:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.297 17:24:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:24.297 17:24:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:20:24.297 17:24:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:24.297 17:24:33 -- host/auth.sh@44 -- # digest=sha384 00:20:24.297 17:24:33 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:24.297 17:24:33 -- host/auth.sh@44 -- # keyid=2 00:20:24.297 17:24:33 -- host/auth.sh@45 -- # key=DHHC-1:01:ODE2ZTQ1N2Y2ZDFiZmFiNGE3MjUwMGM4MGEwZDI2NTVrVT6Y: 00:20:24.297 17:24:33 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:24.297 17:24:33 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:24.297 17:24:33 -- host/auth.sh@49 -- # echo DHHC-1:01:ODE2ZTQ1N2Y2ZDFiZmFiNGE3MjUwMGM4MGEwZDI2NTVrVT6Y: 00:20:24.297 17:24:33 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:20:24.297 17:24:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:24.297 17:24:33 -- host/auth.sh@68 -- # digest=sha384 00:20:24.297 17:24:33 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:24.297 17:24:33 -- host/auth.sh@68 -- # keyid=2 00:20:24.297 17:24:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:24.297 17:24:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.297 17:24:33 -- common/autotest_common.sh@10 -- # set +x 00:20:24.297 17:24:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.297 17:24:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:24.297 17:24:33 -- nvmf/common.sh@717 -- # local ip 00:20:24.297 17:24:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:24.297 17:24:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:24.297 17:24:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.297 17:24:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.297 17:24:33 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:24.297 17:24:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:24.297 17:24:33 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:24.297 17:24:33 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:24.297 17:24:33 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:24.297 17:24:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:24.297 17:24:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.297 17:24:33 -- common/autotest_common.sh@10 -- # set +x 00:20:24.554 nvme0n1 00:20:24.554 17:24:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.554 17:24:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.554 17:24:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:24.554 17:24:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.554 17:24:33 -- common/autotest_common.sh@10 -- # set +x 00:20:24.554 17:24:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.554 17:24:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.554 17:24:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.554 17:24:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.554 17:24:33 -- common/autotest_common.sh@10 -- # set +x 00:20:24.554 17:24:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.554 17:24:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 
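Each connect_authenticate pass seen in this trace drives the same four host-side RPCs: restrict the allowed digest and DH group, attach an RDMA controller with the key under test, confirm that a controller named nvme0 shows up, then detach it. A rough manual replay through SPDK's scripts/rpc.py is sketched here for the sha384/ffdhe2048/key2 iteration just above; the RPC names, transport arguments, and NQNs are copied verbatim from the log, while the rpc.py wrapping itself is only an illustration of what rpc_cmd forwards.

    # Illustrative manual replay of one connect_authenticate iteration.
    # Assumes the SPDK apps from this run are still up and ./scripts/rpc.py
    # talks to the initiator's RPC socket.
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
            -a 192.168.100.8 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key key2
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # test expects: nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0              # tear down before the next combination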
00:20:24.554 17:24:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:20:24.554 17:24:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:24.554 17:24:33 -- host/auth.sh@44 -- # digest=sha384 00:20:24.554 17:24:33 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:24.554 17:24:33 -- host/auth.sh@44 -- # keyid=3 00:20:24.554 17:24:33 -- host/auth.sh@45 -- # key=DHHC-1:02:OTRkNmZkNmY5MjU5YzlhMjY4N2NjZTRmNzViYzU2MTZjODI0MDZiNmJhZDVlZGM2i4WWqA==: 00:20:24.554 17:24:33 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:24.554 17:24:33 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:24.554 17:24:33 -- host/auth.sh@49 -- # echo DHHC-1:02:OTRkNmZkNmY5MjU5YzlhMjY4N2NjZTRmNzViYzU2MTZjODI0MDZiNmJhZDVlZGM2i4WWqA==: 00:20:24.554 17:24:33 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3 00:20:24.554 17:24:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:24.554 17:24:33 -- host/auth.sh@68 -- # digest=sha384 00:20:24.554 17:24:33 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:24.554 17:24:33 -- host/auth.sh@68 -- # keyid=3 00:20:24.554 17:24:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:24.554 17:24:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.554 17:24:33 -- common/autotest_common.sh@10 -- # set +x 00:20:24.812 17:24:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.812 17:24:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:24.812 17:24:33 -- nvmf/common.sh@717 -- # local ip 00:20:24.812 17:24:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:24.812 17:24:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:24.812 17:24:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.812 17:24:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.812 17:24:33 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:24.812 17:24:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:24.812 17:24:33 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:24.812 17:24:33 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:24.812 17:24:33 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:24.812 17:24:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:24.812 17:24:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.812 17:24:33 -- common/autotest_common.sh@10 -- # set +x 00:20:24.812 nvme0n1 00:20:24.812 17:24:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.812 17:24:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.812 17:24:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:24.812 17:24:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.812 17:24:34 -- common/autotest_common.sh@10 -- # set +x 00:20:24.812 17:24:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.812 17:24:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.812 17:24:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.812 17:24:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.812 17:24:34 -- common/autotest_common.sh@10 -- # set +x 00:20:25.070 17:24:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.070 17:24:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:25.070 17:24:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:20:25.070 
17:24:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:25.070 17:24:34 -- host/auth.sh@44 -- # digest=sha384 00:20:25.070 17:24:34 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:25.070 17:24:34 -- host/auth.sh@44 -- # keyid=4 00:20:25.070 17:24:34 -- host/auth.sh@45 -- # key=DHHC-1:03:ZmNhMmNlNGY2OWRmYjc5OTZmMzdhMWY4OTA1ZWQxYjVhOWM5ZjViZjAxOGExMjMyODc4NWEzODMxNWIyZDI0NhNL01Y=: 00:20:25.070 17:24:34 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:25.070 17:24:34 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:25.070 17:24:34 -- host/auth.sh@49 -- # echo DHHC-1:03:ZmNhMmNlNGY2OWRmYjc5OTZmMzdhMWY4OTA1ZWQxYjVhOWM5ZjViZjAxOGExMjMyODc4NWEzODMxNWIyZDI0NhNL01Y=: 00:20:25.070 17:24:34 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:20:25.070 17:24:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:25.070 17:24:34 -- host/auth.sh@68 -- # digest=sha384 00:20:25.070 17:24:34 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:25.070 17:24:34 -- host/auth.sh@68 -- # keyid=4 00:20:25.070 17:24:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:25.070 17:24:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.070 17:24:34 -- common/autotest_common.sh@10 -- # set +x 00:20:25.070 17:24:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.070 17:24:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:25.070 17:24:34 -- nvmf/common.sh@717 -- # local ip 00:20:25.070 17:24:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:25.070 17:24:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:25.070 17:24:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.070 17:24:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.070 17:24:34 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:25.070 17:24:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:25.070 17:24:34 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:25.070 17:24:34 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:25.070 17:24:34 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:25.071 17:24:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:25.071 17:24:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.071 17:24:34 -- common/autotest_common.sh@10 -- # set +x 00:20:25.071 nvme0n1 00:20:25.071 17:24:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.071 17:24:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.071 17:24:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.071 17:24:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:25.071 17:24:34 -- common/autotest_common.sh@10 -- # set +x 00:20:25.071 17:24:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.329 17:24:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.329 17:24:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.329 17:24:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.329 17:24:34 -- common/autotest_common.sh@10 -- # set +x 00:20:25.329 17:24:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.329 17:24:34 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:25.329 17:24:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:25.329 17:24:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 
ffdhe3072 0 00:20:25.329 17:24:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:25.329 17:24:34 -- host/auth.sh@44 -- # digest=sha384 00:20:25.329 17:24:34 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:25.329 17:24:34 -- host/auth.sh@44 -- # keyid=0 00:20:25.329 17:24:34 -- host/auth.sh@45 -- # key=DHHC-1:00:M2EzYzU5NDY1Y2VhZGY4OGI3OTNjYWJiMmM5MjcwMjF5ns9H: 00:20:25.329 17:24:34 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:25.329 17:24:34 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:25.329 17:24:34 -- host/auth.sh@49 -- # echo DHHC-1:00:M2EzYzU5NDY1Y2VhZGY4OGI3OTNjYWJiMmM5MjcwMjF5ns9H: 00:20:25.329 17:24:34 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:20:25.329 17:24:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:25.329 17:24:34 -- host/auth.sh@68 -- # digest=sha384 00:20:25.329 17:24:34 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:25.329 17:24:34 -- host/auth.sh@68 -- # keyid=0 00:20:25.329 17:24:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:25.329 17:24:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.329 17:24:34 -- common/autotest_common.sh@10 -- # set +x 00:20:25.329 17:24:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.329 17:24:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:25.329 17:24:34 -- nvmf/common.sh@717 -- # local ip 00:20:25.329 17:24:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:25.329 17:24:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:25.329 17:24:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.329 17:24:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.329 17:24:34 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:25.329 17:24:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:25.329 17:24:34 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:25.329 17:24:34 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:25.329 17:24:34 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:25.329 17:24:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:25.329 17:24:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.329 17:24:34 -- common/autotest_common.sh@10 -- # set +x 00:20:25.588 nvme0n1 00:20:25.588 17:24:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.588 17:24:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.588 17:24:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:25.588 17:24:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.588 17:24:34 -- common/autotest_common.sh@10 -- # set +x 00:20:25.588 17:24:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.588 17:24:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.588 17:24:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.588 17:24:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.588 17:24:34 -- common/autotest_common.sh@10 -- # set +x 00:20:25.589 17:24:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.589 17:24:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:25.589 17:24:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:20:25.589 17:24:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:25.589 17:24:34 -- host/auth.sh@44 -- # 
digest=sha384 00:20:25.589 17:24:34 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:25.589 17:24:34 -- host/auth.sh@44 -- # keyid=1 00:20:25.589 17:24:34 -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:25.589 17:24:34 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:25.589 17:24:34 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:25.589 17:24:34 -- host/auth.sh@49 -- # echo DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:25.589 17:24:34 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:20:25.589 17:24:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:25.589 17:24:34 -- host/auth.sh@68 -- # digest=sha384 00:20:25.589 17:24:34 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:25.589 17:24:34 -- host/auth.sh@68 -- # keyid=1 00:20:25.589 17:24:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:25.589 17:24:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.589 17:24:34 -- common/autotest_common.sh@10 -- # set +x 00:20:25.589 17:24:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.589 17:24:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:25.589 17:24:34 -- nvmf/common.sh@717 -- # local ip 00:20:25.589 17:24:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:25.589 17:24:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:25.589 17:24:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.589 17:24:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.589 17:24:34 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:25.589 17:24:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:25.589 17:24:34 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:25.589 17:24:34 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:25.589 17:24:34 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:25.589 17:24:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:25.589 17:24:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.589 17:24:34 -- common/autotest_common.sh@10 -- # set +x 00:20:25.847 nvme0n1 00:20:25.847 17:24:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.847 17:24:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.847 17:24:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:25.847 17:24:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.847 17:24:34 -- common/autotest_common.sh@10 -- # set +x 00:20:25.847 17:24:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.847 17:24:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.847 17:24:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.847 17:24:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.847 17:24:34 -- common/autotest_common.sh@10 -- # set +x 00:20:25.847 17:24:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.847 17:24:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:25.847 17:24:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:20:25.847 17:24:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:25.847 17:24:34 -- host/auth.sh@44 -- # digest=sha384 00:20:25.847 17:24:34 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:25.847 
17:24:34 -- host/auth.sh@44 -- # keyid=2 00:20:25.847 17:24:34 -- host/auth.sh@45 -- # key=DHHC-1:01:ODE2ZTQ1N2Y2ZDFiZmFiNGE3MjUwMGM4MGEwZDI2NTVrVT6Y: 00:20:25.847 17:24:34 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:25.847 17:24:34 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:25.847 17:24:34 -- host/auth.sh@49 -- # echo DHHC-1:01:ODE2ZTQ1N2Y2ZDFiZmFiNGE3MjUwMGM4MGEwZDI2NTVrVT6Y: 00:20:25.847 17:24:34 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:20:25.847 17:24:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:25.847 17:24:34 -- host/auth.sh@68 -- # digest=sha384 00:20:25.847 17:24:34 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:25.847 17:24:34 -- host/auth.sh@68 -- # keyid=2 00:20:25.848 17:24:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:25.848 17:24:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.848 17:24:34 -- common/autotest_common.sh@10 -- # set +x 00:20:25.848 17:24:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.848 17:24:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:25.848 17:24:34 -- nvmf/common.sh@717 -- # local ip 00:20:25.848 17:24:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:25.848 17:24:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:25.848 17:24:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.848 17:24:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.848 17:24:34 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:25.848 17:24:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:25.848 17:24:35 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:25.848 17:24:35 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:25.848 17:24:35 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:25.848 17:24:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:25.848 17:24:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.848 17:24:35 -- common/autotest_common.sh@10 -- # set +x 00:20:26.106 nvme0n1 00:20:26.106 17:24:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.106 17:24:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:26.106 17:24:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:26.106 17:24:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.106 17:24:35 -- common/autotest_common.sh@10 -- # set +x 00:20:26.106 17:24:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.106 17:24:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.106 17:24:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:26.106 17:24:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.106 17:24:35 -- common/autotest_common.sh@10 -- # set +x 00:20:26.106 17:24:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.106 17:24:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:26.106 17:24:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:20:26.106 17:24:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:26.106 17:24:35 -- host/auth.sh@44 -- # digest=sha384 00:20:26.106 17:24:35 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:26.106 17:24:35 -- host/auth.sh@44 -- # keyid=3 00:20:26.106 17:24:35 -- host/auth.sh@45 -- # 
key=DHHC-1:02:OTRkNmZkNmY5MjU5YzlhMjY4N2NjZTRmNzViYzU2MTZjODI0MDZiNmJhZDVlZGM2i4WWqA==: 00:20:26.106 17:24:35 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:26.106 17:24:35 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:26.106 17:24:35 -- host/auth.sh@49 -- # echo DHHC-1:02:OTRkNmZkNmY5MjU5YzlhMjY4N2NjZTRmNzViYzU2MTZjODI0MDZiNmJhZDVlZGM2i4WWqA==: 00:20:26.106 17:24:35 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:20:26.106 17:24:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:26.106 17:24:35 -- host/auth.sh@68 -- # digest=sha384 00:20:26.106 17:24:35 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:26.106 17:24:35 -- host/auth.sh@68 -- # keyid=3 00:20:26.106 17:24:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:26.107 17:24:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.107 17:24:35 -- common/autotest_common.sh@10 -- # set +x 00:20:26.107 17:24:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.107 17:24:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:26.107 17:24:35 -- nvmf/common.sh@717 -- # local ip 00:20:26.107 17:24:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:26.107 17:24:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:26.107 17:24:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:26.107 17:24:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:26.107 17:24:35 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:26.107 17:24:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:26.107 17:24:35 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:26.107 17:24:35 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:26.107 17:24:35 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:26.107 17:24:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:26.107 17:24:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.107 17:24:35 -- common/autotest_common.sh@10 -- # set +x 00:20:26.365 nvme0n1 00:20:26.365 17:24:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.365 17:24:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:26.365 17:24:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.365 17:24:35 -- common/autotest_common.sh@10 -- # set +x 00:20:26.365 17:24:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:26.365 17:24:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.365 17:24:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.365 17:24:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:26.365 17:24:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.365 17:24:35 -- common/autotest_common.sh@10 -- # set +x 00:20:26.365 17:24:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.365 17:24:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:26.365 17:24:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:20:26.365 17:24:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:26.365 17:24:35 -- host/auth.sh@44 -- # digest=sha384 00:20:26.365 17:24:35 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:26.365 17:24:35 -- host/auth.sh@44 -- # keyid=4 00:20:26.365 17:24:35 -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmNhMmNlNGY2OWRmYjc5OTZmMzdhMWY4OTA1ZWQxYjVhOWM5ZjViZjAxOGExMjMyODc4NWEzODMxNWIyZDI0NhNL01Y=: 00:20:26.365 17:24:35 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:26.623 17:24:35 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:26.623 17:24:35 -- host/auth.sh@49 -- # echo DHHC-1:03:ZmNhMmNlNGY2OWRmYjc5OTZmMzdhMWY4OTA1ZWQxYjVhOWM5ZjViZjAxOGExMjMyODc4NWEzODMxNWIyZDI0NhNL01Y=: 00:20:26.623 17:24:35 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:20:26.623 17:24:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:26.623 17:24:35 -- host/auth.sh@68 -- # digest=sha384 00:20:26.623 17:24:35 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:26.623 17:24:35 -- host/auth.sh@68 -- # keyid=4 00:20:26.623 17:24:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:26.623 17:24:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.623 17:24:35 -- common/autotest_common.sh@10 -- # set +x 00:20:26.623 17:24:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.623 17:24:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:26.623 17:24:35 -- nvmf/common.sh@717 -- # local ip 00:20:26.623 17:24:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:26.623 17:24:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:26.623 17:24:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:26.623 17:24:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:26.623 17:24:35 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:26.623 17:24:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:26.623 17:24:35 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:26.623 17:24:35 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:26.623 17:24:35 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:26.623 17:24:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:26.623 17:24:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.623 17:24:35 -- common/autotest_common.sh@10 -- # set +x 00:20:26.623 nvme0n1 00:20:26.623 17:24:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.623 17:24:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:26.623 17:24:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:26.623 17:24:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.623 17:24:35 -- common/autotest_common.sh@10 -- # set +x 00:20:26.882 17:24:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.882 17:24:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.882 17:24:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:26.882 17:24:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.882 17:24:35 -- common/autotest_common.sh@10 -- # set +x 00:20:26.882 17:24:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.882 17:24:35 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:26.882 17:24:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:26.882 17:24:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:20:26.882 17:24:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:26.882 17:24:35 -- host/auth.sh@44 -- # digest=sha384 00:20:26.882 17:24:35 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:26.882 17:24:35 -- host/auth.sh@44 -- # keyid=0 00:20:26.882 17:24:35 -- 
host/auth.sh@45 -- # key=DHHC-1:00:M2EzYzU5NDY1Y2VhZGY4OGI3OTNjYWJiMmM5MjcwMjF5ns9H: 00:20:26.882 17:24:35 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:26.882 17:24:35 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:26.882 17:24:35 -- host/auth.sh@49 -- # echo DHHC-1:00:M2EzYzU5NDY1Y2VhZGY4OGI3OTNjYWJiMmM5MjcwMjF5ns9H: 00:20:26.882 17:24:35 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:20:26.882 17:24:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:26.882 17:24:35 -- host/auth.sh@68 -- # digest=sha384 00:20:26.882 17:24:35 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:26.882 17:24:35 -- host/auth.sh@68 -- # keyid=0 00:20:26.882 17:24:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:26.882 17:24:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.882 17:24:35 -- common/autotest_common.sh@10 -- # set +x 00:20:26.882 17:24:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.882 17:24:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:26.882 17:24:35 -- nvmf/common.sh@717 -- # local ip 00:20:26.882 17:24:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:26.882 17:24:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:26.882 17:24:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:26.882 17:24:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:26.882 17:24:35 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:26.882 17:24:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:26.882 17:24:35 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:26.882 17:24:35 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:26.882 17:24:35 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:26.882 17:24:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:26.882 17:24:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.882 17:24:35 -- common/autotest_common.sh@10 -- # set +x 00:20:27.141 nvme0n1 00:20:27.141 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.141 17:24:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:27.141 17:24:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:27.141 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.141 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:20:27.141 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.141 17:24:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.141 17:24:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:27.141 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.141 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:20:27.141 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.141 17:24:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:27.141 17:24:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:20:27.141 17:24:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:27.141 17:24:36 -- host/auth.sh@44 -- # digest=sha384 00:20:27.141 17:24:36 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:27.141 17:24:36 -- host/auth.sh@44 -- # keyid=1 00:20:27.141 17:24:36 -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:27.141 17:24:36 -- 
host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:27.141 17:24:36 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:27.141 17:24:36 -- host/auth.sh@49 -- # echo DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:27.141 17:24:36 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:20:27.141 17:24:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:27.141 17:24:36 -- host/auth.sh@68 -- # digest=sha384 00:20:27.141 17:24:36 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:27.141 17:24:36 -- host/auth.sh@68 -- # keyid=1 00:20:27.141 17:24:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:27.141 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.141 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:20:27.141 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.141 17:24:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:27.141 17:24:36 -- nvmf/common.sh@717 -- # local ip 00:20:27.141 17:24:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:27.141 17:24:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:27.141 17:24:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:27.141 17:24:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:27.141 17:24:36 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:27.141 17:24:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:27.141 17:24:36 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:27.141 17:24:36 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:27.141 17:24:36 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:27.141 17:24:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:27.141 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.141 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:20:27.399 nvme0n1 00:20:27.399 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.399 17:24:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:27.399 17:24:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:27.399 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.399 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:20:27.399 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.657 17:24:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.657 17:24:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:27.657 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.657 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:20:27.657 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.657 17:24:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:27.657 17:24:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:20:27.657 17:24:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:27.657 17:24:36 -- host/auth.sh@44 -- # digest=sha384 00:20:27.657 17:24:36 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:27.657 17:24:36 -- host/auth.sh@44 -- # keyid=2 00:20:27.657 17:24:36 -- host/auth.sh@45 -- # key=DHHC-1:01:ODE2ZTQ1N2Y2ZDFiZmFiNGE3MjUwMGM4MGEwZDI2NTVrVT6Y: 00:20:27.657 17:24:36 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:27.657 17:24:36 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:27.657 
17:24:36 -- host/auth.sh@49 -- # echo DHHC-1:01:ODE2ZTQ1N2Y2ZDFiZmFiNGE3MjUwMGM4MGEwZDI2NTVrVT6Y: 00:20:27.657 17:24:36 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:20:27.657 17:24:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:27.657 17:24:36 -- host/auth.sh@68 -- # digest=sha384 00:20:27.657 17:24:36 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:27.657 17:24:36 -- host/auth.sh@68 -- # keyid=2 00:20:27.657 17:24:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:27.657 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.657 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:20:27.657 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.657 17:24:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:27.657 17:24:36 -- nvmf/common.sh@717 -- # local ip 00:20:27.657 17:24:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:27.657 17:24:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:27.657 17:24:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:27.657 17:24:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:27.657 17:24:36 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:27.657 17:24:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:27.657 17:24:36 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:27.657 17:24:36 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:27.657 17:24:36 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:27.657 17:24:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:27.657 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.657 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:20:27.916 nvme0n1 00:20:27.916 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.916 17:24:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:27.916 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.916 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:20:27.916 17:24:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:27.916 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.916 17:24:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.916 17:24:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:27.916 17:24:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.916 17:24:37 -- common/autotest_common.sh@10 -- # set +x 00:20:27.916 17:24:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.916 17:24:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:27.916 17:24:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:20:27.916 17:24:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:27.916 17:24:37 -- host/auth.sh@44 -- # digest=sha384 00:20:27.916 17:24:37 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:27.916 17:24:37 -- host/auth.sh@44 -- # keyid=3 00:20:27.916 17:24:37 -- host/auth.sh@45 -- # key=DHHC-1:02:OTRkNmZkNmY5MjU5YzlhMjY4N2NjZTRmNzViYzU2MTZjODI0MDZiNmJhZDVlZGM2i4WWqA==: 00:20:27.916 17:24:37 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:27.916 17:24:37 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:27.916 17:24:37 -- host/auth.sh@49 -- # echo DHHC-1:02:OTRkNmZkNmY5MjU5YzlhMjY4N2NjZTRmNzViYzU2MTZjODI0MDZiNmJhZDVlZGM2i4WWqA==: 
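The target-side half of each iteration is the nvmet_auth_set_key call (host/auth.sh@42-49): it echoes the HMAC name, the FFDHE group, and the DHHC-1 secret for the selected key id. The xtrace does not show where those echoes are redirected; assuming the test drives the Linux kernel nvmet target, they would typically land in the per-host DH-HMAC-CHAP configfs attributes, roughly as below. The configfs path and attribute names are an assumption, not visible in this log; the hostnqn and the key string are reused from entries earlier in the trace.

    # Hedged reconstruction of where nvmet_auth_set_key's echoes likely go,
    # assuming a Linux kernel nvmet target with DH-HMAC-CHAP support.
    # The directory name reuses the hostnqn from the attach commands in this log,
    # but the actual redirection targets are not shown in the xtrace.
    host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo 'hmac(sha384)' > "$host_cfg/dhchap_hash"     # digest echoed above
    echo 'ffdhe4096'    > "$host_cfg/dhchap_dhgroup"  # DH group echoed above
    echo 'DHHC-1:02:OTRkNmZkNmY5MjU5YzlhMjY4N2NjZTRmNzViYzU2MTZjODI0MDZiNmJhZDVlZGM2i4WWqA==:' \
                        > "$host_cfg/dhchap_key"      # key id 3 secret echoed above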
00:20:27.916 17:24:37 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:20:27.916 17:24:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:27.916 17:24:37 -- host/auth.sh@68 -- # digest=sha384 00:20:27.916 17:24:37 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:27.916 17:24:37 -- host/auth.sh@68 -- # keyid=3 00:20:27.916 17:24:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:27.916 17:24:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.916 17:24:37 -- common/autotest_common.sh@10 -- # set +x 00:20:27.916 17:24:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.916 17:24:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:27.916 17:24:37 -- nvmf/common.sh@717 -- # local ip 00:20:27.916 17:24:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:27.916 17:24:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:27.916 17:24:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:27.916 17:24:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:27.916 17:24:37 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:27.916 17:24:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:27.916 17:24:37 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:27.916 17:24:37 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:27.916 17:24:37 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:27.916 17:24:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:27.916 17:24:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.916 17:24:37 -- common/autotest_common.sh@10 -- # set +x 00:20:28.173 nvme0n1 00:20:28.173 17:24:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.173 17:24:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:28.173 17:24:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:28.173 17:24:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.173 17:24:37 -- common/autotest_common.sh@10 -- # set +x 00:20:28.173 17:24:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.173 17:24:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.173 17:24:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:28.173 17:24:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.173 17:24:37 -- common/autotest_common.sh@10 -- # set +x 00:20:28.432 17:24:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.432 17:24:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:28.432 17:24:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:20:28.432 17:24:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:28.432 17:24:37 -- host/auth.sh@44 -- # digest=sha384 00:20:28.432 17:24:37 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:28.432 17:24:37 -- host/auth.sh@44 -- # keyid=4 00:20:28.432 17:24:37 -- host/auth.sh@45 -- # key=DHHC-1:03:ZmNhMmNlNGY2OWRmYjc5OTZmMzdhMWY4OTA1ZWQxYjVhOWM5ZjViZjAxOGExMjMyODc4NWEzODMxNWIyZDI0NhNL01Y=: 00:20:28.432 17:24:37 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:28.432 17:24:37 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:28.432 17:24:37 -- host/auth.sh@49 -- # echo DHHC-1:03:ZmNhMmNlNGY2OWRmYjc5OTZmMzdhMWY4OTA1ZWQxYjVhOWM5ZjViZjAxOGExMjMyODc4NWEzODMxNWIyZDI0NhNL01Y=: 00:20:28.432 17:24:37 -- host/auth.sh@111 -- # 
connect_authenticate sha384 ffdhe4096 4 00:20:28.432 17:24:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:28.432 17:24:37 -- host/auth.sh@68 -- # digest=sha384 00:20:28.432 17:24:37 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:28.432 17:24:37 -- host/auth.sh@68 -- # keyid=4 00:20:28.432 17:24:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:28.432 17:24:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.432 17:24:37 -- common/autotest_common.sh@10 -- # set +x 00:20:28.432 17:24:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.432 17:24:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:28.432 17:24:37 -- nvmf/common.sh@717 -- # local ip 00:20:28.432 17:24:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:28.432 17:24:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:28.432 17:24:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:28.432 17:24:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:28.432 17:24:37 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:28.432 17:24:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:28.432 17:24:37 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:28.432 17:24:37 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:28.432 17:24:37 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:28.432 17:24:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:28.432 17:24:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.432 17:24:37 -- common/autotest_common.sh@10 -- # set +x 00:20:28.690 nvme0n1 00:20:28.690 17:24:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.690 17:24:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:28.690 17:24:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.690 17:24:37 -- common/autotest_common.sh@10 -- # set +x 00:20:28.690 17:24:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:28.690 17:24:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.690 17:24:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.690 17:24:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:28.690 17:24:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.690 17:24:37 -- common/autotest_common.sh@10 -- # set +x 00:20:28.690 17:24:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.690 17:24:37 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:28.690 17:24:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:28.690 17:24:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:20:28.690 17:24:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:28.690 17:24:37 -- host/auth.sh@44 -- # digest=sha384 00:20:28.690 17:24:37 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:28.690 17:24:37 -- host/auth.sh@44 -- # keyid=0 00:20:28.690 17:24:37 -- host/auth.sh@45 -- # key=DHHC-1:00:M2EzYzU5NDY1Y2VhZGY4OGI3OTNjYWJiMmM5MjcwMjF5ns9H: 00:20:28.690 17:24:37 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:28.690 17:24:37 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:28.690 17:24:37 -- host/auth.sh@49 -- # echo DHHC-1:00:M2EzYzU5NDY1Y2VhZGY4OGI3OTNjYWJiMmM5MjcwMjF5ns9H: 00:20:28.690 17:24:37 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:20:28.690 17:24:37 -- 
host/auth.sh@66 -- # local digest dhgroup keyid 00:20:28.690 17:24:37 -- host/auth.sh@68 -- # digest=sha384 00:20:28.690 17:24:37 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:28.690 17:24:37 -- host/auth.sh@68 -- # keyid=0 00:20:28.690 17:24:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:28.690 17:24:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.691 17:24:37 -- common/autotest_common.sh@10 -- # set +x 00:20:28.691 17:24:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.691 17:24:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:28.691 17:24:37 -- nvmf/common.sh@717 -- # local ip 00:20:28.691 17:24:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:28.691 17:24:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:28.691 17:24:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:28.691 17:24:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:28.691 17:24:37 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:28.691 17:24:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:28.691 17:24:37 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:28.691 17:24:37 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:28.691 17:24:37 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:28.691 17:24:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:28.691 17:24:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.691 17:24:37 -- common/autotest_common.sh@10 -- # set +x 00:20:29.258 nvme0n1 00:20:29.258 17:24:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.258 17:24:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:29.258 17:24:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:29.258 17:24:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.258 17:24:38 -- common/autotest_common.sh@10 -- # set +x 00:20:29.258 17:24:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.258 17:24:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.258 17:24:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:29.258 17:24:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.258 17:24:38 -- common/autotest_common.sh@10 -- # set +x 00:20:29.258 17:24:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.258 17:24:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:29.258 17:24:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:20:29.258 17:24:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:29.258 17:24:38 -- host/auth.sh@44 -- # digest=sha384 00:20:29.258 17:24:38 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:29.258 17:24:38 -- host/auth.sh@44 -- # keyid=1 00:20:29.258 17:24:38 -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:29.258 17:24:38 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:29.258 17:24:38 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:29.258 17:24:38 -- host/auth.sh@49 -- # echo DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:29.258 17:24:38 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:20:29.258 17:24:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:29.258 17:24:38 -- host/auth.sh@68 -- # 
digest=sha384 00:20:29.258 17:24:38 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:29.258 17:24:38 -- host/auth.sh@68 -- # keyid=1 00:20:29.258 17:24:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:29.258 17:24:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.258 17:24:38 -- common/autotest_common.sh@10 -- # set +x 00:20:29.258 17:24:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.258 17:24:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:29.258 17:24:38 -- nvmf/common.sh@717 -- # local ip 00:20:29.258 17:24:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:29.258 17:24:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:29.258 17:24:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.258 17:24:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.258 17:24:38 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:29.258 17:24:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:29.258 17:24:38 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:29.258 17:24:38 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:29.258 17:24:38 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:29.258 17:24:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:29.258 17:24:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.258 17:24:38 -- common/autotest_common.sh@10 -- # set +x 00:20:29.825 nvme0n1 00:20:29.825 17:24:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.825 17:24:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:29.825 17:24:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:29.825 17:24:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.825 17:24:38 -- common/autotest_common.sh@10 -- # set +x 00:20:29.825 17:24:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.825 17:24:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.825 17:24:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:29.825 17:24:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.825 17:24:38 -- common/autotest_common.sh@10 -- # set +x 00:20:29.825 17:24:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.825 17:24:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:29.825 17:24:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:20:29.825 17:24:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:29.825 17:24:38 -- host/auth.sh@44 -- # digest=sha384 00:20:29.825 17:24:38 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:29.825 17:24:38 -- host/auth.sh@44 -- # keyid=2 00:20:29.825 17:24:38 -- host/auth.sh@45 -- # key=DHHC-1:01:ODE2ZTQ1N2Y2ZDFiZmFiNGE3MjUwMGM4MGEwZDI2NTVrVT6Y: 00:20:29.825 17:24:38 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:29.825 17:24:38 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:29.825 17:24:38 -- host/auth.sh@49 -- # echo DHHC-1:01:ODE2ZTQ1N2Y2ZDFiZmFiNGE3MjUwMGM4MGEwZDI2NTVrVT6Y: 00:20:29.825 17:24:38 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:20:29.825 17:24:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:29.825 17:24:38 -- host/auth.sh@68 -- # digest=sha384 00:20:29.825 17:24:38 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:29.825 17:24:38 -- host/auth.sh@68 -- # keyid=2 00:20:29.825 
17:24:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:29.825 17:24:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.825 17:24:38 -- common/autotest_common.sh@10 -- # set +x 00:20:29.825 17:24:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.825 17:24:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:29.825 17:24:38 -- nvmf/common.sh@717 -- # local ip 00:20:29.825 17:24:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:29.825 17:24:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:29.825 17:24:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.825 17:24:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.825 17:24:38 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:29.825 17:24:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:29.825 17:24:38 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:29.825 17:24:38 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:29.825 17:24:38 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:29.825 17:24:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:29.825 17:24:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.825 17:24:38 -- common/autotest_common.sh@10 -- # set +x 00:20:30.084 nvme0n1 00:20:30.084 17:24:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.084 17:24:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:30.084 17:24:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.084 17:24:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:30.084 17:24:39 -- common/autotest_common.sh@10 -- # set +x 00:20:30.084 17:24:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.084 17:24:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.084 17:24:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:30.084 17:24:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.084 17:24:39 -- common/autotest_common.sh@10 -- # set +x 00:20:30.342 17:24:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.342 17:24:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:30.342 17:24:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:20:30.342 17:24:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:30.342 17:24:39 -- host/auth.sh@44 -- # digest=sha384 00:20:30.342 17:24:39 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:30.342 17:24:39 -- host/auth.sh@44 -- # keyid=3 00:20:30.342 17:24:39 -- host/auth.sh@45 -- # key=DHHC-1:02:OTRkNmZkNmY5MjU5YzlhMjY4N2NjZTRmNzViYzU2MTZjODI0MDZiNmJhZDVlZGM2i4WWqA==: 00:20:30.342 17:24:39 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:30.342 17:24:39 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:30.342 17:24:39 -- host/auth.sh@49 -- # echo DHHC-1:02:OTRkNmZkNmY5MjU5YzlhMjY4N2NjZTRmNzViYzU2MTZjODI0MDZiNmJhZDVlZGM2i4WWqA==: 00:20:30.342 17:24:39 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:20:30.342 17:24:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:30.342 17:24:39 -- host/auth.sh@68 -- # digest=sha384 00:20:30.342 17:24:39 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:30.342 17:24:39 -- host/auth.sh@68 -- # keyid=3 00:20:30.342 17:24:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:20:30.342 17:24:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.342 17:24:39 -- common/autotest_common.sh@10 -- # set +x 00:20:30.342 17:24:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.342 17:24:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:30.342 17:24:39 -- nvmf/common.sh@717 -- # local ip 00:20:30.342 17:24:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:30.342 17:24:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:30.342 17:24:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:30.342 17:24:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:30.342 17:24:39 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:30.342 17:24:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:30.342 17:24:39 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:30.342 17:24:39 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:30.342 17:24:39 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:30.342 17:24:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:30.342 17:24:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.342 17:24:39 -- common/autotest_common.sh@10 -- # set +x 00:20:30.600 nvme0n1 00:20:30.600 17:24:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.600 17:24:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:30.600 17:24:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:30.600 17:24:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.600 17:24:39 -- common/autotest_common.sh@10 -- # set +x 00:20:30.600 17:24:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.600 17:24:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.600 17:24:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:30.600 17:24:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.600 17:24:39 -- common/autotest_common.sh@10 -- # set +x 00:20:30.858 17:24:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.858 17:24:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:30.858 17:24:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:20:30.858 17:24:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:30.858 17:24:39 -- host/auth.sh@44 -- # digest=sha384 00:20:30.858 17:24:39 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:30.858 17:24:39 -- host/auth.sh@44 -- # keyid=4 00:20:30.858 17:24:39 -- host/auth.sh@45 -- # key=DHHC-1:03:ZmNhMmNlNGY2OWRmYjc5OTZmMzdhMWY4OTA1ZWQxYjVhOWM5ZjViZjAxOGExMjMyODc4NWEzODMxNWIyZDI0NhNL01Y=: 00:20:30.858 17:24:39 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:30.858 17:24:39 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:30.858 17:24:39 -- host/auth.sh@49 -- # echo DHHC-1:03:ZmNhMmNlNGY2OWRmYjc5OTZmMzdhMWY4OTA1ZWQxYjVhOWM5ZjViZjAxOGExMjMyODc4NWEzODMxNWIyZDI0NhNL01Y=: 00:20:30.858 17:24:39 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:20:30.858 17:24:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:30.858 17:24:39 -- host/auth.sh@68 -- # digest=sha384 00:20:30.858 17:24:39 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:30.858 17:24:39 -- host/auth.sh@68 -- # keyid=4 00:20:30.858 17:24:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:30.858 17:24:39 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.858 17:24:39 -- common/autotest_common.sh@10 -- # set +x 00:20:30.858 17:24:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.858 17:24:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:30.858 17:24:39 -- nvmf/common.sh@717 -- # local ip 00:20:30.858 17:24:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:30.858 17:24:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:30.858 17:24:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:30.858 17:24:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:30.858 17:24:39 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:30.858 17:24:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:30.858 17:24:39 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:30.858 17:24:39 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:30.858 17:24:39 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:30.858 17:24:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:30.858 17:24:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.858 17:24:39 -- common/autotest_common.sh@10 -- # set +x 00:20:31.116 nvme0n1 00:20:31.116 17:24:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.116 17:24:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:31.116 17:24:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:31.116 17:24:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.116 17:24:40 -- common/autotest_common.sh@10 -- # set +x 00:20:31.116 17:24:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.116 17:24:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.116 17:24:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:31.116 17:24:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.116 17:24:40 -- common/autotest_common.sh@10 -- # set +x 00:20:31.374 17:24:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.374 17:24:40 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:31.374 17:24:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:31.374 17:24:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:20:31.374 17:24:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:31.374 17:24:40 -- host/auth.sh@44 -- # digest=sha384 00:20:31.374 17:24:40 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:31.374 17:24:40 -- host/auth.sh@44 -- # keyid=0 00:20:31.374 17:24:40 -- host/auth.sh@45 -- # key=DHHC-1:00:M2EzYzU5NDY1Y2VhZGY4OGI3OTNjYWJiMmM5MjcwMjF5ns9H: 00:20:31.374 17:24:40 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:31.374 17:24:40 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:31.374 17:24:40 -- host/auth.sh@49 -- # echo DHHC-1:00:M2EzYzU5NDY1Y2VhZGY4OGI3OTNjYWJiMmM5MjcwMjF5ns9H: 00:20:31.374 17:24:40 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:20:31.374 17:24:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:31.374 17:24:40 -- host/auth.sh@68 -- # digest=sha384 00:20:31.374 17:24:40 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:31.374 17:24:40 -- host/auth.sh@68 -- # keyid=0 00:20:31.374 17:24:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:31.374 17:24:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.374 
17:24:40 -- common/autotest_common.sh@10 -- # set +x 00:20:31.374 17:24:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.374 17:24:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:31.374 17:24:40 -- nvmf/common.sh@717 -- # local ip 00:20:31.374 17:24:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:31.374 17:24:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:31.374 17:24:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:31.374 17:24:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:31.374 17:24:40 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:31.374 17:24:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:31.374 17:24:40 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:31.374 17:24:40 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:31.374 17:24:40 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:31.374 17:24:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:31.374 17:24:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.374 17:24:40 -- common/autotest_common.sh@10 -- # set +x 00:20:31.941 nvme0n1 00:20:31.941 17:24:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.941 17:24:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:31.941 17:24:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.941 17:24:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:31.941 17:24:40 -- common/autotest_common.sh@10 -- # set +x 00:20:31.941 17:24:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.941 17:24:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.941 17:24:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:31.941 17:24:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.941 17:24:41 -- common/autotest_common.sh@10 -- # set +x 00:20:31.941 17:24:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.941 17:24:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:31.941 17:24:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:20:31.941 17:24:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:31.941 17:24:41 -- host/auth.sh@44 -- # digest=sha384 00:20:31.941 17:24:41 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:31.941 17:24:41 -- host/auth.sh@44 -- # keyid=1 00:20:31.941 17:24:41 -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:31.941 17:24:41 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:31.941 17:24:41 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:31.941 17:24:41 -- host/auth.sh@49 -- # echo DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:31.941 17:24:41 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:20:31.941 17:24:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:31.941 17:24:41 -- host/auth.sh@68 -- # digest=sha384 00:20:31.941 17:24:41 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:31.941 17:24:41 -- host/auth.sh@68 -- # keyid=1 00:20:31.941 17:24:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:31.941 17:24:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.941 17:24:41 -- common/autotest_common.sh@10 -- # set +x 00:20:31.941 17:24:41 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.941 17:24:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:31.941 17:24:41 -- nvmf/common.sh@717 -- # local ip 00:20:31.941 17:24:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:31.941 17:24:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:31.941 17:24:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:31.941 17:24:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:31.941 17:24:41 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:31.941 17:24:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:31.941 17:24:41 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:31.941 17:24:41 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:31.941 17:24:41 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:31.941 17:24:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:31.941 17:24:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.941 17:24:41 -- common/autotest_common.sh@10 -- # set +x 00:20:32.508 nvme0n1 00:20:32.508 17:24:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.508 17:24:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:32.508 17:24:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.508 17:24:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:32.508 17:24:41 -- common/autotest_common.sh@10 -- # set +x 00:20:32.508 17:24:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.508 17:24:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.508 17:24:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:32.508 17:24:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.508 17:24:41 -- common/autotest_common.sh@10 -- # set +x 00:20:32.508 17:24:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.508 17:24:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:32.508 17:24:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:20:32.508 17:24:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:32.508 17:24:41 -- host/auth.sh@44 -- # digest=sha384 00:20:32.508 17:24:41 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:32.508 17:24:41 -- host/auth.sh@44 -- # keyid=2 00:20:32.508 17:24:41 -- host/auth.sh@45 -- # key=DHHC-1:01:ODE2ZTQ1N2Y2ZDFiZmFiNGE3MjUwMGM4MGEwZDI2NTVrVT6Y: 00:20:32.508 17:24:41 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:32.508 17:24:41 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:32.508 17:24:41 -- host/auth.sh@49 -- # echo DHHC-1:01:ODE2ZTQ1N2Y2ZDFiZmFiNGE3MjUwMGM4MGEwZDI2NTVrVT6Y: 00:20:32.508 17:24:41 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:20:32.508 17:24:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:32.508 17:24:41 -- host/auth.sh@68 -- # digest=sha384 00:20:32.509 17:24:41 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:32.509 17:24:41 -- host/auth.sh@68 -- # keyid=2 00:20:32.509 17:24:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:32.509 17:24:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.509 17:24:41 -- common/autotest_common.sh@10 -- # set +x 00:20:32.509 17:24:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.509 17:24:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:32.509 17:24:41 -- 
nvmf/common.sh@717 -- # local ip 00:20:32.509 17:24:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:32.767 17:24:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:32.767 17:24:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:32.767 17:24:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:32.767 17:24:41 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:32.767 17:24:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:32.767 17:24:41 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:32.767 17:24:41 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:32.767 17:24:41 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:32.767 17:24:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:32.767 17:24:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.767 17:24:41 -- common/autotest_common.sh@10 -- # set +x 00:20:33.334 nvme0n1 00:20:33.334 17:24:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.334 17:24:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.334 17:24:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:33.334 17:24:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.334 17:24:42 -- common/autotest_common.sh@10 -- # set +x 00:20:33.334 17:24:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.334 17:24:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.334 17:24:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.334 17:24:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.334 17:24:42 -- common/autotest_common.sh@10 -- # set +x 00:20:33.334 17:24:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.334 17:24:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:33.334 17:24:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:20:33.334 17:24:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:33.334 17:24:42 -- host/auth.sh@44 -- # digest=sha384 00:20:33.334 17:24:42 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:33.334 17:24:42 -- host/auth.sh@44 -- # keyid=3 00:20:33.334 17:24:42 -- host/auth.sh@45 -- # key=DHHC-1:02:OTRkNmZkNmY5MjU5YzlhMjY4N2NjZTRmNzViYzU2MTZjODI0MDZiNmJhZDVlZGM2i4WWqA==: 00:20:33.334 17:24:42 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:33.334 17:24:42 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:33.334 17:24:42 -- host/auth.sh@49 -- # echo DHHC-1:02:OTRkNmZkNmY5MjU5YzlhMjY4N2NjZTRmNzViYzU2MTZjODI0MDZiNmJhZDVlZGM2i4WWqA==: 00:20:33.334 17:24:42 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:20:33.334 17:24:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:33.334 17:24:42 -- host/auth.sh@68 -- # digest=sha384 00:20:33.334 17:24:42 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:33.334 17:24:42 -- host/auth.sh@68 -- # keyid=3 00:20:33.334 17:24:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:33.334 17:24:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.334 17:24:42 -- common/autotest_common.sh@10 -- # set +x 00:20:33.334 17:24:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.334 17:24:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:33.334 17:24:42 -- nvmf/common.sh@717 -- # local ip 00:20:33.334 17:24:42 -- nvmf/common.sh@718 -- # 
ip_candidates=() 00:20:33.334 17:24:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:33.334 17:24:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.334 17:24:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.334 17:24:42 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:33.334 17:24:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:33.334 17:24:42 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:33.334 17:24:42 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:33.334 17:24:42 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:33.334 17:24:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:33.334 17:24:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.334 17:24:42 -- common/autotest_common.sh@10 -- # set +x 00:20:33.901 nvme0n1 00:20:33.901 17:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.901 17:24:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.901 17:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.901 17:24:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:33.901 17:24:43 -- common/autotest_common.sh@10 -- # set +x 00:20:33.901 17:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.901 17:24:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.901 17:24:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.901 17:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.901 17:24:43 -- common/autotest_common.sh@10 -- # set +x 00:20:33.901 17:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.901 17:24:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:33.901 17:24:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:20:33.901 17:24:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:33.902 17:24:43 -- host/auth.sh@44 -- # digest=sha384 00:20:33.902 17:24:43 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:33.902 17:24:43 -- host/auth.sh@44 -- # keyid=4 00:20:33.902 17:24:43 -- host/auth.sh@45 -- # key=DHHC-1:03:ZmNhMmNlNGY2OWRmYjc5OTZmMzdhMWY4OTA1ZWQxYjVhOWM5ZjViZjAxOGExMjMyODc4NWEzODMxNWIyZDI0NhNL01Y=: 00:20:33.902 17:24:43 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:33.902 17:24:43 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:33.902 17:24:43 -- host/auth.sh@49 -- # echo DHHC-1:03:ZmNhMmNlNGY2OWRmYjc5OTZmMzdhMWY4OTA1ZWQxYjVhOWM5ZjViZjAxOGExMjMyODc4NWEzODMxNWIyZDI0NhNL01Y=: 00:20:33.902 17:24:43 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:20:33.902 17:24:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:33.902 17:24:43 -- host/auth.sh@68 -- # digest=sha384 00:20:33.902 17:24:43 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:33.902 17:24:43 -- host/auth.sh@68 -- # keyid=4 00:20:33.902 17:24:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:33.902 17:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.902 17:24:43 -- common/autotest_common.sh@10 -- # set +x 00:20:33.902 17:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.902 17:24:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:33.902 17:24:43 -- nvmf/common.sh@717 -- # local ip 00:20:33.902 17:24:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:33.902 17:24:43 -- 
nvmf/common.sh@718 -- # local -A ip_candidates 00:20:33.902 17:24:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.902 17:24:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.902 17:24:43 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:33.902 17:24:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:33.902 17:24:43 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:33.902 17:24:43 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:33.902 17:24:43 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:33.902 17:24:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:33.902 17:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.902 17:24:43 -- common/autotest_common.sh@10 -- # set +x 00:20:34.836 nvme0n1 00:20:34.836 17:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.837 17:24:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.837 17:24:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:34.837 17:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.837 17:24:43 -- common/autotest_common.sh@10 -- # set +x 00:20:34.837 17:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.837 17:24:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.837 17:24:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.837 17:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.837 17:24:43 -- common/autotest_common.sh@10 -- # set +x 00:20:34.837 17:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.837 17:24:43 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:20:34.837 17:24:43 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:34.837 17:24:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:34.837 17:24:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:20:34.837 17:24:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:34.837 17:24:43 -- host/auth.sh@44 -- # digest=sha512 00:20:34.837 17:24:43 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:34.837 17:24:43 -- host/auth.sh@44 -- # keyid=0 00:20:34.837 17:24:43 -- host/auth.sh@45 -- # key=DHHC-1:00:M2EzYzU5NDY1Y2VhZGY4OGI3OTNjYWJiMmM5MjcwMjF5ns9H: 00:20:34.837 17:24:43 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:34.837 17:24:43 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:34.837 17:24:43 -- host/auth.sh@49 -- # echo DHHC-1:00:M2EzYzU5NDY1Y2VhZGY4OGI3OTNjYWJiMmM5MjcwMjF5ns9H: 00:20:34.837 17:24:43 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:20:34.837 17:24:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:34.837 17:24:43 -- host/auth.sh@68 -- # digest=sha512 00:20:34.837 17:24:43 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:34.837 17:24:43 -- host/auth.sh@68 -- # keyid=0 00:20:34.837 17:24:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:34.837 17:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.837 17:24:43 -- common/autotest_common.sh@10 -- # set +x 00:20:34.837 17:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.837 17:24:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:34.837 17:24:43 -- nvmf/common.sh@717 -- # local ip 00:20:34.837 17:24:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:34.837 
17:24:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:34.837 17:24:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.837 17:24:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.837 17:24:43 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:34.837 17:24:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:34.837 17:24:43 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:34.837 17:24:43 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:34.837 17:24:43 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:34.837 17:24:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:34.837 17:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.837 17:24:43 -- common/autotest_common.sh@10 -- # set +x 00:20:34.837 nvme0n1 00:20:34.837 17:24:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.837 17:24:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.837 17:24:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:34.837 17:24:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.837 17:24:44 -- common/autotest_common.sh@10 -- # set +x 00:20:34.837 17:24:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.837 17:24:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.837 17:24:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.837 17:24:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.837 17:24:44 -- common/autotest_common.sh@10 -- # set +x 00:20:35.095 17:24:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.095 17:24:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:35.095 17:24:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:20:35.095 17:24:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:35.095 17:24:44 -- host/auth.sh@44 -- # digest=sha512 00:20:35.095 17:24:44 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:35.095 17:24:44 -- host/auth.sh@44 -- # keyid=1 00:20:35.095 17:24:44 -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:35.095 17:24:44 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:35.095 17:24:44 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:35.095 17:24:44 -- host/auth.sh@49 -- # echo DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:35.095 17:24:44 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:20:35.095 17:24:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:35.095 17:24:44 -- host/auth.sh@68 -- # digest=sha512 00:20:35.095 17:24:44 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:35.095 17:24:44 -- host/auth.sh@68 -- # keyid=1 00:20:35.095 17:24:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:35.095 17:24:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.095 17:24:44 -- common/autotest_common.sh@10 -- # set +x 00:20:35.095 17:24:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.095 17:24:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:35.095 17:24:44 -- nvmf/common.sh@717 -- # local ip 00:20:35.095 17:24:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:35.095 17:24:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:35.095 17:24:44 -- 
nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.095 17:24:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.095 17:24:44 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:35.095 17:24:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:35.095 17:24:44 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:35.095 17:24:44 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:35.095 17:24:44 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:35.095 17:24:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:35.095 17:24:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.095 17:24:44 -- common/autotest_common.sh@10 -- # set +x 00:20:35.095 nvme0n1 00:20:35.095 17:24:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.096 17:24:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:35.096 17:24:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.096 17:24:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.096 17:24:44 -- common/autotest_common.sh@10 -- # set +x 00:20:35.096 17:24:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.354 17:24:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.354 17:24:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.354 17:24:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.354 17:24:44 -- common/autotest_common.sh@10 -- # set +x 00:20:35.354 17:24:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.354 17:24:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:35.354 17:24:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:20:35.354 17:24:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:35.354 17:24:44 -- host/auth.sh@44 -- # digest=sha512 00:20:35.354 17:24:44 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:35.354 17:24:44 -- host/auth.sh@44 -- # keyid=2 00:20:35.354 17:24:44 -- host/auth.sh@45 -- # key=DHHC-1:01:ODE2ZTQ1N2Y2ZDFiZmFiNGE3MjUwMGM4MGEwZDI2NTVrVT6Y: 00:20:35.354 17:24:44 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:35.354 17:24:44 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:35.354 17:24:44 -- host/auth.sh@49 -- # echo DHHC-1:01:ODE2ZTQ1N2Y2ZDFiZmFiNGE3MjUwMGM4MGEwZDI2NTVrVT6Y: 00:20:35.354 17:24:44 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:20:35.354 17:24:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:35.354 17:24:44 -- host/auth.sh@68 -- # digest=sha512 00:20:35.354 17:24:44 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:35.354 17:24:44 -- host/auth.sh@68 -- # keyid=2 00:20:35.354 17:24:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:35.354 17:24:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.354 17:24:44 -- common/autotest_common.sh@10 -- # set +x 00:20:35.354 17:24:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.354 17:24:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:35.354 17:24:44 -- nvmf/common.sh@717 -- # local ip 00:20:35.354 17:24:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:35.354 17:24:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:35.354 17:24:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.354 17:24:44 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.354 17:24:44 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:35.354 17:24:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:35.354 17:24:44 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:35.354 17:24:44 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:35.354 17:24:44 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:35.354 17:24:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:35.354 17:24:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.354 17:24:44 -- common/autotest_common.sh@10 -- # set +x 00:20:35.354 nvme0n1 00:20:35.354 17:24:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.354 17:24:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.354 17:24:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.354 17:24:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:35.354 17:24:44 -- common/autotest_common.sh@10 -- # set +x 00:20:35.612 17:24:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.612 17:24:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.612 17:24:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.612 17:24:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.612 17:24:44 -- common/autotest_common.sh@10 -- # set +x 00:20:35.612 17:24:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.612 17:24:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:35.612 17:24:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:20:35.612 17:24:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:35.612 17:24:44 -- host/auth.sh@44 -- # digest=sha512 00:20:35.612 17:24:44 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:35.612 17:24:44 -- host/auth.sh@44 -- # keyid=3 00:20:35.612 17:24:44 -- host/auth.sh@45 -- # key=DHHC-1:02:OTRkNmZkNmY5MjU5YzlhMjY4N2NjZTRmNzViYzU2MTZjODI0MDZiNmJhZDVlZGM2i4WWqA==: 00:20:35.612 17:24:44 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:35.612 17:24:44 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:35.612 17:24:44 -- host/auth.sh@49 -- # echo DHHC-1:02:OTRkNmZkNmY5MjU5YzlhMjY4N2NjZTRmNzViYzU2MTZjODI0MDZiNmJhZDVlZGM2i4WWqA==: 00:20:35.612 17:24:44 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:20:35.612 17:24:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:35.612 17:24:44 -- host/auth.sh@68 -- # digest=sha512 00:20:35.612 17:24:44 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:35.612 17:24:44 -- host/auth.sh@68 -- # keyid=3 00:20:35.612 17:24:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:35.612 17:24:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.613 17:24:44 -- common/autotest_common.sh@10 -- # set +x 00:20:35.613 17:24:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.613 17:24:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:35.613 17:24:44 -- nvmf/common.sh@717 -- # local ip 00:20:35.613 17:24:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:35.613 17:24:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:35.613 17:24:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.613 17:24:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.613 17:24:44 -- nvmf/common.sh@723 -- # [[ -z 
rdma ]] 00:20:35.613 17:24:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:35.613 17:24:44 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:35.613 17:24:44 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:35.613 17:24:44 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:35.613 17:24:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:35.613 17:24:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.613 17:24:44 -- common/autotest_common.sh@10 -- # set +x 00:20:35.871 nvme0n1 00:20:35.871 17:24:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.871 17:24:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.871 17:24:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.871 17:24:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:35.871 17:24:44 -- common/autotest_common.sh@10 -- # set +x 00:20:35.871 17:24:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.871 17:24:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.871 17:24:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.871 17:24:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.871 17:24:44 -- common/autotest_common.sh@10 -- # set +x 00:20:35.871 17:24:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.871 17:24:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:35.871 17:24:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:20:35.871 17:24:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:35.871 17:24:44 -- host/auth.sh@44 -- # digest=sha512 00:20:35.871 17:24:44 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:35.871 17:24:44 -- host/auth.sh@44 -- # keyid=4 00:20:35.871 17:24:44 -- host/auth.sh@45 -- # key=DHHC-1:03:ZmNhMmNlNGY2OWRmYjc5OTZmMzdhMWY4OTA1ZWQxYjVhOWM5ZjViZjAxOGExMjMyODc4NWEzODMxNWIyZDI0NhNL01Y=: 00:20:35.871 17:24:44 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:35.871 17:24:44 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:35.871 17:24:44 -- host/auth.sh@49 -- # echo DHHC-1:03:ZmNhMmNlNGY2OWRmYjc5OTZmMzdhMWY4OTA1ZWQxYjVhOWM5ZjViZjAxOGExMjMyODc4NWEzODMxNWIyZDI0NhNL01Y=: 00:20:35.871 17:24:44 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:20:35.871 17:24:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:35.871 17:24:44 -- host/auth.sh@68 -- # digest=sha512 00:20:35.871 17:24:44 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:35.871 17:24:44 -- host/auth.sh@68 -- # keyid=4 00:20:35.871 17:24:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:35.871 17:24:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.871 17:24:44 -- common/autotest_common.sh@10 -- # set +x 00:20:35.871 17:24:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.871 17:24:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:35.871 17:24:44 -- nvmf/common.sh@717 -- # local ip 00:20:35.871 17:24:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:35.871 17:24:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:35.871 17:24:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.871 17:24:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.871 17:24:44 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:35.871 17:24:44 -- nvmf/common.sh@723 -- # 
[[ -z NVMF_FIRST_TARGET_IP ]] 00:20:35.871 17:24:44 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:35.871 17:24:44 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:35.871 17:24:44 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:35.871 17:24:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:35.871 17:24:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.871 17:24:44 -- common/autotest_common.sh@10 -- # set +x 00:20:36.129 nvme0n1 00:20:36.129 17:24:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.129 17:24:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.129 17:24:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.130 17:24:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:36.130 17:24:45 -- common/autotest_common.sh@10 -- # set +x 00:20:36.130 17:24:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.130 17:24:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.130 17:24:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.130 17:24:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.130 17:24:45 -- common/autotest_common.sh@10 -- # set +x 00:20:36.130 17:24:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.130 17:24:45 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:36.130 17:24:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:36.130 17:24:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:20:36.130 17:24:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:36.130 17:24:45 -- host/auth.sh@44 -- # digest=sha512 00:20:36.130 17:24:45 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:36.130 17:24:45 -- host/auth.sh@44 -- # keyid=0 00:20:36.130 17:24:45 -- host/auth.sh@45 -- # key=DHHC-1:00:M2EzYzU5NDY1Y2VhZGY4OGI3OTNjYWJiMmM5MjcwMjF5ns9H: 00:20:36.130 17:24:45 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:36.130 17:24:45 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:36.130 17:24:45 -- host/auth.sh@49 -- # echo DHHC-1:00:M2EzYzU5NDY1Y2VhZGY4OGI3OTNjYWJiMmM5MjcwMjF5ns9H: 00:20:36.130 17:24:45 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:20:36.130 17:24:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:36.130 17:24:45 -- host/auth.sh@68 -- # digest=sha512 00:20:36.130 17:24:45 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:36.130 17:24:45 -- host/auth.sh@68 -- # keyid=0 00:20:36.130 17:24:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:36.130 17:24:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.130 17:24:45 -- common/autotest_common.sh@10 -- # set +x 00:20:36.130 17:24:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.130 17:24:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:36.130 17:24:45 -- nvmf/common.sh@717 -- # local ip 00:20:36.130 17:24:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:36.130 17:24:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:36.130 17:24:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.130 17:24:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.130 17:24:45 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:36.130 17:24:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:36.130 17:24:45 -- 
nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:36.130 17:24:45 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:36.130 17:24:45 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:36.130 17:24:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:36.130 17:24:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.130 17:24:45 -- common/autotest_common.sh@10 -- # set +x 00:20:36.389 nvme0n1 00:20:36.389 17:24:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.389 17:24:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:36.389 17:24:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.389 17:24:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.389 17:24:45 -- common/autotest_common.sh@10 -- # set +x 00:20:36.389 17:24:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.389 17:24:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.389 17:24:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.389 17:24:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.389 17:24:45 -- common/autotest_common.sh@10 -- # set +x 00:20:36.389 17:24:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.389 17:24:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:36.389 17:24:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:20:36.389 17:24:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:36.389 17:24:45 -- host/auth.sh@44 -- # digest=sha512 00:20:36.389 17:24:45 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:36.389 17:24:45 -- host/auth.sh@44 -- # keyid=1 00:20:36.389 17:24:45 -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:36.389 17:24:45 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:36.389 17:24:45 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:36.389 17:24:45 -- host/auth.sh@49 -- # echo DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:36.389 17:24:45 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:20:36.389 17:24:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:36.389 17:24:45 -- host/auth.sh@68 -- # digest=sha512 00:20:36.389 17:24:45 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:36.389 17:24:45 -- host/auth.sh@68 -- # keyid=1 00:20:36.389 17:24:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:36.389 17:24:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.389 17:24:45 -- common/autotest_common.sh@10 -- # set +x 00:20:36.389 17:24:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.389 17:24:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:36.389 17:24:45 -- nvmf/common.sh@717 -- # local ip 00:20:36.389 17:24:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:36.389 17:24:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:36.389 17:24:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.389 17:24:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.389 17:24:45 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:36.389 17:24:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:36.389 17:24:45 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:36.389 17:24:45 -- nvmf/common.sh@726 -- # 
[[ -z 192.168.100.8 ]] 00:20:36.389 17:24:45 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:36.389 17:24:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:36.389 17:24:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.389 17:24:45 -- common/autotest_common.sh@10 -- # set +x 00:20:36.648 nvme0n1 00:20:36.648 17:24:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.648 17:24:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.648 17:24:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:36.648 17:24:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.648 17:24:45 -- common/autotest_common.sh@10 -- # set +x 00:20:36.648 17:24:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.648 17:24:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.648 17:24:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.648 17:24:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.648 17:24:45 -- common/autotest_common.sh@10 -- # set +x 00:20:36.648 17:24:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.648 17:24:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:36.648 17:24:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:20:36.648 17:24:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:36.648 17:24:45 -- host/auth.sh@44 -- # digest=sha512 00:20:36.648 17:24:45 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:36.648 17:24:45 -- host/auth.sh@44 -- # keyid=2 00:20:36.648 17:24:45 -- host/auth.sh@45 -- # key=DHHC-1:01:ODE2ZTQ1N2Y2ZDFiZmFiNGE3MjUwMGM4MGEwZDI2NTVrVT6Y: 00:20:36.648 17:24:45 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:36.648 17:24:45 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:36.648 17:24:45 -- host/auth.sh@49 -- # echo DHHC-1:01:ODE2ZTQ1N2Y2ZDFiZmFiNGE3MjUwMGM4MGEwZDI2NTVrVT6Y: 00:20:36.648 17:24:45 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:20:36.648 17:24:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:36.648 17:24:45 -- host/auth.sh@68 -- # digest=sha512 00:20:36.648 17:24:45 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:36.648 17:24:45 -- host/auth.sh@68 -- # keyid=2 00:20:36.648 17:24:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:36.648 17:24:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.648 17:24:45 -- common/autotest_common.sh@10 -- # set +x 00:20:36.648 17:24:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.648 17:24:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:36.648 17:24:45 -- nvmf/common.sh@717 -- # local ip 00:20:36.648 17:24:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:36.648 17:24:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:36.648 17:24:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.648 17:24:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.648 17:24:45 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:36.648 17:24:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:36.648 17:24:45 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:36.648 17:24:45 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:36.648 17:24:45 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:36.648 17:24:45 -- host/auth.sh@70 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:36.648 17:24:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.648 17:24:45 -- common/autotest_common.sh@10 -- # set +x 00:20:36.906 nvme0n1 00:20:36.906 17:24:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.906 17:24:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.906 17:24:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:36.906 17:24:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.906 17:24:46 -- common/autotest_common.sh@10 -- # set +x 00:20:36.906 17:24:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.906 17:24:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.906 17:24:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.906 17:24:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.906 17:24:46 -- common/autotest_common.sh@10 -- # set +x 00:20:36.906 17:24:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.906 17:24:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:36.906 17:24:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:20:36.906 17:24:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:36.906 17:24:46 -- host/auth.sh@44 -- # digest=sha512 00:20:36.906 17:24:46 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:36.906 17:24:46 -- host/auth.sh@44 -- # keyid=3 00:20:36.906 17:24:46 -- host/auth.sh@45 -- # key=DHHC-1:02:OTRkNmZkNmY5MjU5YzlhMjY4N2NjZTRmNzViYzU2MTZjODI0MDZiNmJhZDVlZGM2i4WWqA==: 00:20:36.906 17:24:46 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:36.906 17:24:46 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:36.906 17:24:46 -- host/auth.sh@49 -- # echo DHHC-1:02:OTRkNmZkNmY5MjU5YzlhMjY4N2NjZTRmNzViYzU2MTZjODI0MDZiNmJhZDVlZGM2i4WWqA==: 00:20:36.906 17:24:46 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:20:36.906 17:24:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:36.906 17:24:46 -- host/auth.sh@68 -- # digest=sha512 00:20:36.906 17:24:46 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:36.906 17:24:46 -- host/auth.sh@68 -- # keyid=3 00:20:36.906 17:24:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:36.906 17:24:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.906 17:24:46 -- common/autotest_common.sh@10 -- # set +x 00:20:37.164 17:24:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.164 17:24:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:37.164 17:24:46 -- nvmf/common.sh@717 -- # local ip 00:20:37.164 17:24:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:37.164 17:24:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:37.164 17:24:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.164 17:24:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.164 17:24:46 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:37.164 17:24:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:37.164 17:24:46 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:37.164 17:24:46 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:37.164 17:24:46 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:37.164 17:24:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:37.164 17:24:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.164 17:24:46 -- common/autotest_common.sh@10 -- # set +x 00:20:37.164 nvme0n1 00:20:37.164 17:24:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.164 17:24:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.164 17:24:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.164 17:24:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:37.164 17:24:46 -- common/autotest_common.sh@10 -- # set +x 00:20:37.164 17:24:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.423 17:24:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.423 17:24:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.423 17:24:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.423 17:24:46 -- common/autotest_common.sh@10 -- # set +x 00:20:37.423 17:24:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.423 17:24:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:37.423 17:24:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:20:37.423 17:24:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:37.423 17:24:46 -- host/auth.sh@44 -- # digest=sha512 00:20:37.423 17:24:46 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:37.423 17:24:46 -- host/auth.sh@44 -- # keyid=4 00:20:37.423 17:24:46 -- host/auth.sh@45 -- # key=DHHC-1:03:ZmNhMmNlNGY2OWRmYjc5OTZmMzdhMWY4OTA1ZWQxYjVhOWM5ZjViZjAxOGExMjMyODc4NWEzODMxNWIyZDI0NhNL01Y=: 00:20:37.423 17:24:46 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:37.423 17:24:46 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:37.423 17:24:46 -- host/auth.sh@49 -- # echo DHHC-1:03:ZmNhMmNlNGY2OWRmYjc5OTZmMzdhMWY4OTA1ZWQxYjVhOWM5ZjViZjAxOGExMjMyODc4NWEzODMxNWIyZDI0NhNL01Y=: 00:20:37.423 17:24:46 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:20:37.423 17:24:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:37.423 17:24:46 -- host/auth.sh@68 -- # digest=sha512 00:20:37.423 17:24:46 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:37.423 17:24:46 -- host/auth.sh@68 -- # keyid=4 00:20:37.423 17:24:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:37.423 17:24:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.423 17:24:46 -- common/autotest_common.sh@10 -- # set +x 00:20:37.423 17:24:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.423 17:24:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:37.423 17:24:46 -- nvmf/common.sh@717 -- # local ip 00:20:37.423 17:24:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:37.423 17:24:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:37.423 17:24:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.423 17:24:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.423 17:24:46 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:37.423 17:24:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:37.423 17:24:46 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:37.423 17:24:46 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:37.423 17:24:46 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:37.423 17:24:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key4 00:20:37.424 17:24:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.424 17:24:46 -- common/autotest_common.sh@10 -- # set +x 00:20:37.683 nvme0n1 00:20:37.683 17:24:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.683 17:24:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.683 17:24:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:37.683 17:24:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.683 17:24:46 -- common/autotest_common.sh@10 -- # set +x 00:20:37.683 17:24:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.683 17:24:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.683 17:24:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.683 17:24:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.683 17:24:46 -- common/autotest_common.sh@10 -- # set +x 00:20:37.683 17:24:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.683 17:24:46 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:37.683 17:24:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:37.683 17:24:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:20:37.683 17:24:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:37.683 17:24:46 -- host/auth.sh@44 -- # digest=sha512 00:20:37.683 17:24:46 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:37.683 17:24:46 -- host/auth.sh@44 -- # keyid=0 00:20:37.683 17:24:46 -- host/auth.sh@45 -- # key=DHHC-1:00:M2EzYzU5NDY1Y2VhZGY4OGI3OTNjYWJiMmM5MjcwMjF5ns9H: 00:20:37.683 17:24:46 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:37.683 17:24:46 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:37.683 17:24:46 -- host/auth.sh@49 -- # echo DHHC-1:00:M2EzYzU5NDY1Y2VhZGY4OGI3OTNjYWJiMmM5MjcwMjF5ns9H: 00:20:37.683 17:24:46 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:20:37.683 17:24:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:37.683 17:24:46 -- host/auth.sh@68 -- # digest=sha512 00:20:37.683 17:24:46 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:37.683 17:24:46 -- host/auth.sh@68 -- # keyid=0 00:20:37.683 17:24:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:37.683 17:24:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.683 17:24:46 -- common/autotest_common.sh@10 -- # set +x 00:20:37.683 17:24:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.683 17:24:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:37.683 17:24:46 -- nvmf/common.sh@717 -- # local ip 00:20:37.683 17:24:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:37.683 17:24:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:37.683 17:24:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.683 17:24:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.683 17:24:46 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:37.683 17:24:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:37.683 17:24:46 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:37.683 17:24:46 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:37.683 17:24:46 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:37.683 17:24:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:37.683 17:24:46 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.683 17:24:46 -- common/autotest_common.sh@10 -- # set +x 00:20:37.942 nvme0n1 00:20:37.942 17:24:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.942 17:24:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.942 17:24:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:37.942 17:24:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.942 17:24:47 -- common/autotest_common.sh@10 -- # set +x 00:20:37.943 17:24:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.943 17:24:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.943 17:24:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.943 17:24:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.943 17:24:47 -- common/autotest_common.sh@10 -- # set +x 00:20:37.943 17:24:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.943 17:24:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:37.943 17:24:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:20:37.943 17:24:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:37.943 17:24:47 -- host/auth.sh@44 -- # digest=sha512 00:20:37.943 17:24:47 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:37.943 17:24:47 -- host/auth.sh@44 -- # keyid=1 00:20:37.943 17:24:47 -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:37.943 17:24:47 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:37.943 17:24:47 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:37.943 17:24:47 -- host/auth.sh@49 -- # echo DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:37.943 17:24:47 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:20:37.943 17:24:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:37.943 17:24:47 -- host/auth.sh@68 -- # digest=sha512 00:20:37.943 17:24:47 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:37.943 17:24:47 -- host/auth.sh@68 -- # keyid=1 00:20:37.943 17:24:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:37.943 17:24:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.943 17:24:47 -- common/autotest_common.sh@10 -- # set +x 00:20:37.943 17:24:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.943 17:24:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:37.943 17:24:47 -- nvmf/common.sh@717 -- # local ip 00:20:37.943 17:24:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:37.943 17:24:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:37.943 17:24:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.943 17:24:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.943 17:24:47 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:37.943 17:24:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:37.943 17:24:47 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:37.943 17:24:47 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:37.943 17:24:47 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:37.943 17:24:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:37.943 17:24:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.943 17:24:47 -- 
common/autotest_common.sh@10 -- # set +x 00:20:38.512 nvme0n1 00:20:38.512 17:24:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.512 17:24:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:38.512 17:24:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:38.512 17:24:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.512 17:24:47 -- common/autotest_common.sh@10 -- # set +x 00:20:38.512 17:24:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.512 17:24:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.512 17:24:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:38.512 17:24:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.512 17:24:47 -- common/autotest_common.sh@10 -- # set +x 00:20:38.512 17:24:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.512 17:24:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:38.512 17:24:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:20:38.512 17:24:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:38.512 17:24:47 -- host/auth.sh@44 -- # digest=sha512 00:20:38.512 17:24:47 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:38.512 17:24:47 -- host/auth.sh@44 -- # keyid=2 00:20:38.512 17:24:47 -- host/auth.sh@45 -- # key=DHHC-1:01:ODE2ZTQ1N2Y2ZDFiZmFiNGE3MjUwMGM4MGEwZDI2NTVrVT6Y: 00:20:38.512 17:24:47 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:38.512 17:24:47 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:38.512 17:24:47 -- host/auth.sh@49 -- # echo DHHC-1:01:ODE2ZTQ1N2Y2ZDFiZmFiNGE3MjUwMGM4MGEwZDI2NTVrVT6Y: 00:20:38.512 17:24:47 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:20:38.512 17:24:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:38.512 17:24:47 -- host/auth.sh@68 -- # digest=sha512 00:20:38.512 17:24:47 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:38.512 17:24:47 -- host/auth.sh@68 -- # keyid=2 00:20:38.512 17:24:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:38.512 17:24:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.512 17:24:47 -- common/autotest_common.sh@10 -- # set +x 00:20:38.512 17:24:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.512 17:24:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:38.512 17:24:47 -- nvmf/common.sh@717 -- # local ip 00:20:38.512 17:24:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:38.512 17:24:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:38.512 17:24:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:38.512 17:24:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:38.512 17:24:47 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:38.512 17:24:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:38.512 17:24:47 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:38.512 17:24:47 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:38.512 17:24:47 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:38.512 17:24:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:38.512 17:24:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.512 17:24:47 -- common/autotest_common.sh@10 -- # set +x 00:20:38.772 nvme0n1 00:20:38.772 17:24:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
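
The trace above repeats one fixed pattern for every digest/dhgroup/keyid combination: install the key on the target, restrict the initiator to the matching DH-CHAP parameters, attach a controller with the corresponding --dhchap-key, confirm the controller came up, then detach it. A condensed sketch of that cycle, assembled from the rpc_cmd calls visible in the trace (nvmet_auth_set_key and rpc_cmd are helpers from the test scripts themselves):

# one authentication round, as exercised per combination in this log
digest=sha512 dhgroup=ffdhe4096 keyid=2

nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"            # target side: install the key for the host NQN

rpc_cmd bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"   # initiator: allow only this combination

rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid"                                   # connect + DH-HMAC-CHAP handshake

[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]   # controller present, auth succeeded
rpc_cmd bdev_nvme_detach_controller nvme0                                  # tear down before the next round
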
00:20:38.772 17:24:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:38.772 17:24:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.772 17:24:47 -- common/autotest_common.sh@10 -- # set +x 00:20:38.772 17:24:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:38.772 17:24:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.772 17:24:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.772 17:24:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:38.772 17:24:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.772 17:24:47 -- common/autotest_common.sh@10 -- # set +x 00:20:38.772 17:24:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.772 17:24:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:38.772 17:24:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:20:38.772 17:24:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:38.772 17:24:47 -- host/auth.sh@44 -- # digest=sha512 00:20:38.772 17:24:47 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:38.772 17:24:47 -- host/auth.sh@44 -- # keyid=3 00:20:38.772 17:24:47 -- host/auth.sh@45 -- # key=DHHC-1:02:OTRkNmZkNmY5MjU5YzlhMjY4N2NjZTRmNzViYzU2MTZjODI0MDZiNmJhZDVlZGM2i4WWqA==: 00:20:38.772 17:24:47 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:38.772 17:24:47 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:38.772 17:24:47 -- host/auth.sh@49 -- # echo DHHC-1:02:OTRkNmZkNmY5MjU5YzlhMjY4N2NjZTRmNzViYzU2MTZjODI0MDZiNmJhZDVlZGM2i4WWqA==: 00:20:38.772 17:24:47 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:20:38.772 17:24:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:38.772 17:24:47 -- host/auth.sh@68 -- # digest=sha512 00:20:38.772 17:24:47 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:38.772 17:24:47 -- host/auth.sh@68 -- # keyid=3 00:20:38.772 17:24:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:38.772 17:24:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.772 17:24:47 -- common/autotest_common.sh@10 -- # set +x 00:20:38.772 17:24:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.772 17:24:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:38.772 17:24:47 -- nvmf/common.sh@717 -- # local ip 00:20:38.772 17:24:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:38.772 17:24:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:38.772 17:24:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:38.772 17:24:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:38.772 17:24:47 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:38.772 17:24:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:38.772 17:24:47 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:38.772 17:24:47 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:38.772 17:24:47 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:38.772 17:24:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:38.772 17:24:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.772 17:24:47 -- common/autotest_common.sh@10 -- # set +x 00:20:39.031 nvme0n1 00:20:39.031 17:24:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.031 17:24:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:39.031 
17:24:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:39.031 17:24:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.031 17:24:48 -- common/autotest_common.sh@10 -- # set +x 00:20:39.031 17:24:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.031 17:24:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.031 17:24:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:39.031 17:24:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.031 17:24:48 -- common/autotest_common.sh@10 -- # set +x 00:20:39.290 17:24:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.290 17:24:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:39.290 17:24:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:20:39.290 17:24:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:39.290 17:24:48 -- host/auth.sh@44 -- # digest=sha512 00:20:39.290 17:24:48 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:39.290 17:24:48 -- host/auth.sh@44 -- # keyid=4 00:20:39.290 17:24:48 -- host/auth.sh@45 -- # key=DHHC-1:03:ZmNhMmNlNGY2OWRmYjc5OTZmMzdhMWY4OTA1ZWQxYjVhOWM5ZjViZjAxOGExMjMyODc4NWEzODMxNWIyZDI0NhNL01Y=: 00:20:39.290 17:24:48 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:39.290 17:24:48 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:39.290 17:24:48 -- host/auth.sh@49 -- # echo DHHC-1:03:ZmNhMmNlNGY2OWRmYjc5OTZmMzdhMWY4OTA1ZWQxYjVhOWM5ZjViZjAxOGExMjMyODc4NWEzODMxNWIyZDI0NhNL01Y=: 00:20:39.290 17:24:48 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:20:39.290 17:24:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:39.290 17:24:48 -- host/auth.sh@68 -- # digest=sha512 00:20:39.290 17:24:48 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:39.290 17:24:48 -- host/auth.sh@68 -- # keyid=4 00:20:39.290 17:24:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:39.290 17:24:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.290 17:24:48 -- common/autotest_common.sh@10 -- # set +x 00:20:39.290 17:24:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.290 17:24:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:39.290 17:24:48 -- nvmf/common.sh@717 -- # local ip 00:20:39.290 17:24:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:39.290 17:24:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:39.290 17:24:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:39.290 17:24:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:39.290 17:24:48 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:39.290 17:24:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:39.290 17:24:48 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:39.290 17:24:48 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:39.290 17:24:48 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:39.290 17:24:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:39.290 17:24:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.290 17:24:48 -- common/autotest_common.sh@10 -- # set +x 00:20:39.548 nvme0n1 00:20:39.548 17:24:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.548 17:24:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:39.548 17:24:48 -- host/auth.sh@73 -- # jq -r '.[].name' 
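
The secrets cycled through here are DH-HMAC-CHAP keys in the "DHHC-1:NN:<base64>:" form. As an interpretation (not something the log itself states), the two-digit field selects the hash the secret is paired with (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 blob carries the raw secret plus a 4-byte CRC. A quick, purely illustrative way to inspect one of the keys seen in the trace:

key='DHHC-1:01:ODE2ZTQ1N2Y2ZDFiZmFiNGE3MjUwMGM4MGEwZDI2NTVrVT6Y:'
hmac_id=$(cut -d: -f2 <<< "$key")                            # 01 -> intended for an SHA-256 transform
decoded_len=$(cut -d: -f3 <<< "$key" | base64 -d | wc -c)    # secret bytes plus 4-byte CRC
echo "hmac=$hmac_id decoded_bytes=$decoded_len"
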
00:20:39.548 17:24:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.548 17:24:48 -- common/autotest_common.sh@10 -- # set +x 00:20:39.548 17:24:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.548 17:24:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.548 17:24:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:39.548 17:24:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.548 17:24:48 -- common/autotest_common.sh@10 -- # set +x 00:20:39.548 17:24:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.548 17:24:48 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:39.548 17:24:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:39.548 17:24:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:20:39.548 17:24:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:39.548 17:24:48 -- host/auth.sh@44 -- # digest=sha512 00:20:39.548 17:24:48 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:39.548 17:24:48 -- host/auth.sh@44 -- # keyid=0 00:20:39.548 17:24:48 -- host/auth.sh@45 -- # key=DHHC-1:00:M2EzYzU5NDY1Y2VhZGY4OGI3OTNjYWJiMmM5MjcwMjF5ns9H: 00:20:39.548 17:24:48 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:39.548 17:24:48 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:39.548 17:24:48 -- host/auth.sh@49 -- # echo DHHC-1:00:M2EzYzU5NDY1Y2VhZGY4OGI3OTNjYWJiMmM5MjcwMjF5ns9H: 00:20:39.548 17:24:48 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:20:39.548 17:24:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:39.548 17:24:48 -- host/auth.sh@68 -- # digest=sha512 00:20:39.548 17:24:48 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:39.548 17:24:48 -- host/auth.sh@68 -- # keyid=0 00:20:39.548 17:24:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:39.548 17:24:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.548 17:24:48 -- common/autotest_common.sh@10 -- # set +x 00:20:39.548 17:24:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.548 17:24:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:39.548 17:24:48 -- nvmf/common.sh@717 -- # local ip 00:20:39.548 17:24:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:39.548 17:24:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:39.548 17:24:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:39.548 17:24:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:39.548 17:24:48 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:39.548 17:24:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:39.548 17:24:48 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:39.548 17:24:48 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:39.548 17:24:48 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:39.548 17:24:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:39.548 17:24:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.548 17:24:48 -- common/autotest_common.sh@10 -- # set +x 00:20:40.116 nvme0n1 00:20:40.116 17:24:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:40.116 17:24:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:40.116 17:24:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:40.116 17:24:49 -- host/auth.sh@73 -- # jq -r 
'.[].name' 00:20:40.116 17:24:49 -- common/autotest_common.sh@10 -- # set +x 00:20:40.116 17:24:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:40.116 17:24:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.116 17:24:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:40.116 17:24:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:40.116 17:24:49 -- common/autotest_common.sh@10 -- # set +x 00:20:40.116 17:24:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:40.116 17:24:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:40.116 17:24:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:20:40.116 17:24:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:40.116 17:24:49 -- host/auth.sh@44 -- # digest=sha512 00:20:40.116 17:24:49 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:40.116 17:24:49 -- host/auth.sh@44 -- # keyid=1 00:20:40.116 17:24:49 -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:40.116 17:24:49 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:40.116 17:24:49 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:40.116 17:24:49 -- host/auth.sh@49 -- # echo DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:40.116 17:24:49 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:20:40.116 17:24:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:40.116 17:24:49 -- host/auth.sh@68 -- # digest=sha512 00:20:40.116 17:24:49 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:40.116 17:24:49 -- host/auth.sh@68 -- # keyid=1 00:20:40.116 17:24:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:40.116 17:24:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:40.116 17:24:49 -- common/autotest_common.sh@10 -- # set +x 00:20:40.116 17:24:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:40.116 17:24:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:40.116 17:24:49 -- nvmf/common.sh@717 -- # local ip 00:20:40.116 17:24:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:40.116 17:24:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:40.116 17:24:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.116 17:24:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.116 17:24:49 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:40.116 17:24:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:40.116 17:24:49 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:40.116 17:24:49 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:40.116 17:24:49 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:40.116 17:24:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:40.116 17:24:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:40.116 17:24:49 -- common/autotest_common.sh@10 -- # set +x 00:20:40.374 nvme0n1 00:20:40.374 17:24:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:40.374 17:24:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:40.374 17:24:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:40.374 17:24:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:40.374 17:24:49 -- common/autotest_common.sh@10 -- # set +x 00:20:40.632 
17:24:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:40.632 17:24:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.632 17:24:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:40.632 17:24:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:40.632 17:24:49 -- common/autotest_common.sh@10 -- # set +x 00:20:40.632 17:24:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:40.632 17:24:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:40.632 17:24:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:20:40.632 17:24:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:40.632 17:24:49 -- host/auth.sh@44 -- # digest=sha512 00:20:40.632 17:24:49 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:40.632 17:24:49 -- host/auth.sh@44 -- # keyid=2 00:20:40.632 17:24:49 -- host/auth.sh@45 -- # key=DHHC-1:01:ODE2ZTQ1N2Y2ZDFiZmFiNGE3MjUwMGM4MGEwZDI2NTVrVT6Y: 00:20:40.632 17:24:49 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:40.632 17:24:49 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:40.632 17:24:49 -- host/auth.sh@49 -- # echo DHHC-1:01:ODE2ZTQ1N2Y2ZDFiZmFiNGE3MjUwMGM4MGEwZDI2NTVrVT6Y: 00:20:40.632 17:24:49 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:20:40.632 17:24:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:40.632 17:24:49 -- host/auth.sh@68 -- # digest=sha512 00:20:40.632 17:24:49 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:40.632 17:24:49 -- host/auth.sh@68 -- # keyid=2 00:20:40.632 17:24:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:40.633 17:24:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:40.633 17:24:49 -- common/autotest_common.sh@10 -- # set +x 00:20:40.633 17:24:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:40.633 17:24:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:40.633 17:24:49 -- nvmf/common.sh@717 -- # local ip 00:20:40.633 17:24:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:40.633 17:24:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:40.633 17:24:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.633 17:24:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.633 17:24:49 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:40.633 17:24:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:40.633 17:24:49 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:40.633 17:24:49 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:40.633 17:24:49 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:40.633 17:24:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:40.633 17:24:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:40.633 17:24:49 -- common/autotest_common.sh@10 -- # set +x 00:20:40.891 nvme0n1 00:20:40.891 17:24:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:40.891 17:24:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:40.891 17:24:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:40.891 17:24:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:40.891 17:24:50 -- common/autotest_common.sh@10 -- # set +x 00:20:40.891 17:24:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.149 17:24:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.149 
17:24:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.149 17:24:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.149 17:24:50 -- common/autotest_common.sh@10 -- # set +x 00:20:41.149 17:24:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.149 17:24:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:41.149 17:24:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:20:41.149 17:24:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:41.149 17:24:50 -- host/auth.sh@44 -- # digest=sha512 00:20:41.149 17:24:50 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:41.149 17:24:50 -- host/auth.sh@44 -- # keyid=3 00:20:41.149 17:24:50 -- host/auth.sh@45 -- # key=DHHC-1:02:OTRkNmZkNmY5MjU5YzlhMjY4N2NjZTRmNzViYzU2MTZjODI0MDZiNmJhZDVlZGM2i4WWqA==: 00:20:41.149 17:24:50 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:41.149 17:24:50 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:41.149 17:24:50 -- host/auth.sh@49 -- # echo DHHC-1:02:OTRkNmZkNmY5MjU5YzlhMjY4N2NjZTRmNzViYzU2MTZjODI0MDZiNmJhZDVlZGM2i4WWqA==: 00:20:41.149 17:24:50 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:20:41.149 17:24:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:41.149 17:24:50 -- host/auth.sh@68 -- # digest=sha512 00:20:41.149 17:24:50 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:41.149 17:24:50 -- host/auth.sh@68 -- # keyid=3 00:20:41.149 17:24:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:41.149 17:24:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.149 17:24:50 -- common/autotest_common.sh@10 -- # set +x 00:20:41.149 17:24:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.149 17:24:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:41.149 17:24:50 -- nvmf/common.sh@717 -- # local ip 00:20:41.149 17:24:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:41.149 17:24:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:41.149 17:24:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.149 17:24:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.149 17:24:50 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:41.149 17:24:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:41.149 17:24:50 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:41.149 17:24:50 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:41.149 17:24:50 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:41.149 17:24:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:41.149 17:24:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.149 17:24:50 -- common/autotest_common.sh@10 -- # set +x 00:20:41.408 nvme0n1 00:20:41.408 17:24:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.408 17:24:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.408 17:24:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:41.408 17:24:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.408 17:24:50 -- common/autotest_common.sh@10 -- # set +x 00:20:41.408 17:24:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.667 17:24:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.667 17:24:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.667 17:24:50 
-- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.667 17:24:50 -- common/autotest_common.sh@10 -- # set +x 00:20:41.667 17:24:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.667 17:24:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:41.667 17:24:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:20:41.667 17:24:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:41.667 17:24:50 -- host/auth.sh@44 -- # digest=sha512 00:20:41.667 17:24:50 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:41.667 17:24:50 -- host/auth.sh@44 -- # keyid=4 00:20:41.667 17:24:50 -- host/auth.sh@45 -- # key=DHHC-1:03:ZmNhMmNlNGY2OWRmYjc5OTZmMzdhMWY4OTA1ZWQxYjVhOWM5ZjViZjAxOGExMjMyODc4NWEzODMxNWIyZDI0NhNL01Y=: 00:20:41.667 17:24:50 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:41.667 17:24:50 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:41.667 17:24:50 -- host/auth.sh@49 -- # echo DHHC-1:03:ZmNhMmNlNGY2OWRmYjc5OTZmMzdhMWY4OTA1ZWQxYjVhOWM5ZjViZjAxOGExMjMyODc4NWEzODMxNWIyZDI0NhNL01Y=: 00:20:41.667 17:24:50 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:20:41.667 17:24:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:41.667 17:24:50 -- host/auth.sh@68 -- # digest=sha512 00:20:41.667 17:24:50 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:41.667 17:24:50 -- host/auth.sh@68 -- # keyid=4 00:20:41.667 17:24:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:41.667 17:24:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.667 17:24:50 -- common/autotest_common.sh@10 -- # set +x 00:20:41.667 17:24:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.667 17:24:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:41.667 17:24:50 -- nvmf/common.sh@717 -- # local ip 00:20:41.667 17:24:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:41.667 17:24:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:41.667 17:24:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.667 17:24:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.667 17:24:50 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:41.667 17:24:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:41.667 17:24:50 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:41.667 17:24:50 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:41.667 17:24:50 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:41.667 17:24:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:41.667 17:24:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.667 17:24:50 -- common/autotest_common.sh@10 -- # set +x 00:20:41.925 nvme0n1 00:20:41.925 17:24:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.925 17:24:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.925 17:24:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:41.925 17:24:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.925 17:24:51 -- common/autotest_common.sh@10 -- # set +x 00:20:41.925 17:24:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.183 17:24:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.183 17:24:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.183 17:24:51 -- common/autotest_common.sh@549 -- # xtrace_disable 
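
Between every attach, the trace walks through nvmf/common.sh@717-731: get_main_ns_ip builds an associative array mapping each transport to the name of the variable that holds its target address, then dereferences the entry for the transport under test. A rough reconstruction from the trace follows; the exact source may differ, and the use of indirect expansion is an assumption made to match the [[ -z 192.168.100.8 ]] check:

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP    # holds 192.168.100.8 in this run
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1                      # trace: [[ -z rdma ]]
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1    # trace: [[ -z NVMF_FIRST_TARGET_IP ]]
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1                               # trace: [[ -z 192.168.100.8 ]]
    echo "${!ip}"
}
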
00:20:42.183 17:24:51 -- common/autotest_common.sh@10 -- # set +x 00:20:42.183 17:24:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.183 17:24:51 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:42.183 17:24:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:42.183 17:24:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:20:42.183 17:24:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:42.183 17:24:51 -- host/auth.sh@44 -- # digest=sha512 00:20:42.183 17:24:51 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:42.183 17:24:51 -- host/auth.sh@44 -- # keyid=0 00:20:42.183 17:24:51 -- host/auth.sh@45 -- # key=DHHC-1:00:M2EzYzU5NDY1Y2VhZGY4OGI3OTNjYWJiMmM5MjcwMjF5ns9H: 00:20:42.183 17:24:51 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:42.183 17:24:51 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:42.183 17:24:51 -- host/auth.sh@49 -- # echo DHHC-1:00:M2EzYzU5NDY1Y2VhZGY4OGI3OTNjYWJiMmM5MjcwMjF5ns9H: 00:20:42.183 17:24:51 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:20:42.183 17:24:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:42.183 17:24:51 -- host/auth.sh@68 -- # digest=sha512 00:20:42.183 17:24:51 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:42.183 17:24:51 -- host/auth.sh@68 -- # keyid=0 00:20:42.183 17:24:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:42.183 17:24:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.183 17:24:51 -- common/autotest_common.sh@10 -- # set +x 00:20:42.183 17:24:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.183 17:24:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:42.183 17:24:51 -- nvmf/common.sh@717 -- # local ip 00:20:42.183 17:24:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:42.184 17:24:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:42.184 17:24:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.184 17:24:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.184 17:24:51 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:42.184 17:24:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:42.184 17:24:51 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:42.184 17:24:51 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:42.184 17:24:51 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:42.184 17:24:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:42.184 17:24:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.184 17:24:51 -- common/autotest_common.sh@10 -- # set +x 00:20:42.749 nvme0n1 00:20:42.749 17:24:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.749 17:24:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:42.749 17:24:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.749 17:24:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.749 17:24:51 -- common/autotest_common.sh@10 -- # set +x 00:20:42.749 17:24:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.749 17:24:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.749 17:24:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.749 17:24:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.749 17:24:51 -- common/autotest_common.sh@10 -- # set +x 
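
On the target side, nvmet_auth_set_key (host/auth.sh@42-49) records the digest, DH group, and DHHC-1 secret for the current keyid and hands them to the kernel nvmet target; the three echo lines in the trace ('hmac(sha512)', the ffdhe group, and the key) are exactly the values being pushed. The destination paths in the sketch below are an assumption based on the usual nvmet configfs layout, not something visible in this log:

# hypothetical target-side helper; configfs paths and attribute names are assumed
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[$keyid]}                     # e.g. DHHC-1:00:M2EzYzU5...ns9H:
    local host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac($digest)" > "$host_cfg/dhchap_hash"       # assumed attribute name
    echo "$dhgroup" > "$host_cfg/dhchap_dhgroup"         # assumed attribute name
    echo "$key" > "$host_cfg/dhchap_key"
}
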
00:20:42.749 17:24:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.749 17:24:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:42.749 17:24:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:20:42.749 17:24:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:42.749 17:24:51 -- host/auth.sh@44 -- # digest=sha512 00:20:42.749 17:24:51 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:42.749 17:24:51 -- host/auth.sh@44 -- # keyid=1 00:20:42.749 17:24:51 -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:42.750 17:24:51 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:42.750 17:24:51 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:42.750 17:24:51 -- host/auth.sh@49 -- # echo DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:42.750 17:24:51 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:20:42.750 17:24:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:42.750 17:24:51 -- host/auth.sh@68 -- # digest=sha512 00:20:42.750 17:24:51 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:42.750 17:24:51 -- host/auth.sh@68 -- # keyid=1 00:20:42.750 17:24:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:42.750 17:24:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.750 17:24:51 -- common/autotest_common.sh@10 -- # set +x 00:20:42.750 17:24:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.750 17:24:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:42.750 17:24:51 -- nvmf/common.sh@717 -- # local ip 00:20:42.750 17:24:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:42.750 17:24:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:42.750 17:24:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.750 17:24:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.750 17:24:51 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:42.750 17:24:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:42.750 17:24:51 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:42.750 17:24:51 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:42.750 17:24:51 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:42.750 17:24:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:42.750 17:24:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.750 17:24:51 -- common/autotest_common.sh@10 -- # set +x 00:20:43.315 nvme0n1 00:20:43.315 17:24:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.315 17:24:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.315 17:24:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.315 17:24:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:43.315 17:24:52 -- common/autotest_common.sh@10 -- # set +x 00:20:43.315 17:24:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.315 17:24:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.315 17:24:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.315 17:24:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.315 17:24:52 -- common/autotest_common.sh@10 -- # set +x 00:20:43.573 17:24:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.573 17:24:52 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:43.573 17:24:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:20:43.573 17:24:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:43.573 17:24:52 -- host/auth.sh@44 -- # digest=sha512 00:20:43.573 17:24:52 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:43.573 17:24:52 -- host/auth.sh@44 -- # keyid=2 00:20:43.573 17:24:52 -- host/auth.sh@45 -- # key=DHHC-1:01:ODE2ZTQ1N2Y2ZDFiZmFiNGE3MjUwMGM4MGEwZDI2NTVrVT6Y: 00:20:43.573 17:24:52 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:43.573 17:24:52 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:43.573 17:24:52 -- host/auth.sh@49 -- # echo DHHC-1:01:ODE2ZTQ1N2Y2ZDFiZmFiNGE3MjUwMGM4MGEwZDI2NTVrVT6Y: 00:20:43.573 17:24:52 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:20:43.573 17:24:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:43.573 17:24:52 -- host/auth.sh@68 -- # digest=sha512 00:20:43.573 17:24:52 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:43.573 17:24:52 -- host/auth.sh@68 -- # keyid=2 00:20:43.573 17:24:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:43.573 17:24:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.573 17:24:52 -- common/autotest_common.sh@10 -- # set +x 00:20:43.573 17:24:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.573 17:24:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:43.573 17:24:52 -- nvmf/common.sh@717 -- # local ip 00:20:43.573 17:24:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:43.573 17:24:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:43.573 17:24:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.573 17:24:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.573 17:24:52 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:43.573 17:24:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:43.573 17:24:52 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:43.573 17:24:52 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:43.573 17:24:52 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:43.573 17:24:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:43.573 17:24:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.573 17:24:52 -- common/autotest_common.sh@10 -- # set +x 00:20:44.139 nvme0n1 00:20:44.139 17:24:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.139 17:24:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:44.139 17:24:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.139 17:24:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.139 17:24:53 -- common/autotest_common.sh@10 -- # set +x 00:20:44.139 17:24:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.139 17:24:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.139 17:24:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.139 17:24:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.139 17:24:53 -- common/autotest_common.sh@10 -- # set +x 00:20:44.139 17:24:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.139 17:24:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:44.139 17:24:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 
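
The host/auth.sh@108-111 markers that recur throughout the trace give away the shape of the driving loops: for the current digest the test walks every DH group and, inside that, every key index, reinstalling the target key and re-authenticating each time. A sketch inferred from those markers (in this part of the run the digest is fixed at sha512; the dhgroups array lists only the groups actually seen in the trace):

dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

for dhgroup in "${dhgroups[@]}"; do                        # @108
    for keyid in "${!keys[@]}"; do                         # @109  (keys 0..4 in this run)
        nvmet_auth_set_key sha512 "$dhgroup" "$keyid"      # @110
        connect_authenticate sha512 "$dhgroup" "$keyid"    # @111
    done
done
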
00:20:44.139 17:24:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:44.139 17:24:53 -- host/auth.sh@44 -- # digest=sha512 00:20:44.139 17:24:53 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:44.139 17:24:53 -- host/auth.sh@44 -- # keyid=3 00:20:44.139 17:24:53 -- host/auth.sh@45 -- # key=DHHC-1:02:OTRkNmZkNmY5MjU5YzlhMjY4N2NjZTRmNzViYzU2MTZjODI0MDZiNmJhZDVlZGM2i4WWqA==: 00:20:44.139 17:24:53 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:44.139 17:24:53 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:44.139 17:24:53 -- host/auth.sh@49 -- # echo DHHC-1:02:OTRkNmZkNmY5MjU5YzlhMjY4N2NjZTRmNzViYzU2MTZjODI0MDZiNmJhZDVlZGM2i4WWqA==: 00:20:44.139 17:24:53 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:20:44.139 17:24:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:44.139 17:24:53 -- host/auth.sh@68 -- # digest=sha512 00:20:44.139 17:24:53 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:44.139 17:24:53 -- host/auth.sh@68 -- # keyid=3 00:20:44.139 17:24:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:44.139 17:24:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.139 17:24:53 -- common/autotest_common.sh@10 -- # set +x 00:20:44.139 17:24:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.139 17:24:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:44.139 17:24:53 -- nvmf/common.sh@717 -- # local ip 00:20:44.139 17:24:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:44.139 17:24:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:44.139 17:24:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.139 17:24:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.139 17:24:53 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:44.139 17:24:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:44.139 17:24:53 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:44.139 17:24:53 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:44.139 17:24:53 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:44.139 17:24:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:44.139 17:24:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.139 17:24:53 -- common/autotest_common.sh@10 -- # set +x 00:20:44.705 nvme0n1 00:20:44.705 17:24:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.705 17:24:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.705 17:24:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.705 17:24:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:44.705 17:24:53 -- common/autotest_common.sh@10 -- # set +x 00:20:44.705 17:24:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.964 17:24:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.964 17:24:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.964 17:24:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.964 17:24:53 -- common/autotest_common.sh@10 -- # set +x 00:20:44.964 17:24:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.964 17:24:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:44.964 17:24:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:20:44.964 17:24:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:44.964 
17:24:53 -- host/auth.sh@44 -- # digest=sha512 00:20:44.964 17:24:53 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:44.964 17:24:53 -- host/auth.sh@44 -- # keyid=4 00:20:44.964 17:24:53 -- host/auth.sh@45 -- # key=DHHC-1:03:ZmNhMmNlNGY2OWRmYjc5OTZmMzdhMWY4OTA1ZWQxYjVhOWM5ZjViZjAxOGExMjMyODc4NWEzODMxNWIyZDI0NhNL01Y=: 00:20:44.964 17:24:53 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:44.964 17:24:53 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:44.964 17:24:53 -- host/auth.sh@49 -- # echo DHHC-1:03:ZmNhMmNlNGY2OWRmYjc5OTZmMzdhMWY4OTA1ZWQxYjVhOWM5ZjViZjAxOGExMjMyODc4NWEzODMxNWIyZDI0NhNL01Y=: 00:20:44.964 17:24:53 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:20:44.964 17:24:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:44.964 17:24:53 -- host/auth.sh@68 -- # digest=sha512 00:20:44.964 17:24:53 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:44.964 17:24:53 -- host/auth.sh@68 -- # keyid=4 00:20:44.964 17:24:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:44.964 17:24:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.964 17:24:53 -- common/autotest_common.sh@10 -- # set +x 00:20:44.964 17:24:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.964 17:24:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:44.964 17:24:53 -- nvmf/common.sh@717 -- # local ip 00:20:44.964 17:24:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:44.964 17:24:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:44.964 17:24:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.964 17:24:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.964 17:24:54 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:44.964 17:24:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:44.964 17:24:54 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:44.964 17:24:54 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:44.964 17:24:54 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:44.964 17:24:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:44.964 17:24:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.964 17:24:54 -- common/autotest_common.sh@10 -- # set +x 00:20:45.531 nvme0n1 00:20:45.531 17:24:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.531 17:24:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.531 17:24:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.531 17:24:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:45.531 17:24:54 -- common/autotest_common.sh@10 -- # set +x 00:20:45.531 17:24:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.531 17:24:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.531 17:24:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.531 17:24:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.531 17:24:54 -- common/autotest_common.sh@10 -- # set +x 00:20:45.531 17:24:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.531 17:24:54 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:45.531 17:24:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:45.531 17:24:54 -- host/auth.sh@44 -- # digest=sha256 00:20:45.531 17:24:54 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:45.531 
17:24:54 -- host/auth.sh@44 -- # keyid=1 00:20:45.531 17:24:54 -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:45.531 17:24:54 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:45.531 17:24:54 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:45.531 17:24:54 -- host/auth.sh@49 -- # echo DHHC-1:00:OWI1NjA2NDA4Y2MwZmI4ZWZjODA4NGFjOGM3MzQ0Y2M2MzljNjE3Y2ZiZjNiOTdk2VkmgA==: 00:20:45.531 17:24:54 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:45.531 17:24:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.531 17:24:54 -- common/autotest_common.sh@10 -- # set +x 00:20:45.531 17:24:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.531 17:24:54 -- host/auth.sh@119 -- # get_main_ns_ip 00:20:45.531 17:24:54 -- nvmf/common.sh@717 -- # local ip 00:20:45.531 17:24:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:45.531 17:24:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:45.531 17:24:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.531 17:24:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.531 17:24:54 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:45.531 17:24:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:45.531 17:24:54 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:45.531 17:24:54 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:45.531 17:24:54 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:45.531 17:24:54 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:45.531 17:24:54 -- common/autotest_common.sh@638 -- # local es=0 00:20:45.531 17:24:54 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:45.531 17:24:54 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:45.531 17:24:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:45.531 17:24:54 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:45.531 17:24:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:45.531 17:24:54 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:45.531 17:24:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.531 17:24:54 -- common/autotest_common.sh@10 -- # set +x 00:20:45.531 request: 00:20:45.531 { 00:20:45.531 "name": "nvme0", 00:20:45.531 "trtype": "rdma", 00:20:45.531 "traddr": "192.168.100.8", 00:20:45.531 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:45.531 "adrfam": "ipv4", 00:20:45.531 "trsvcid": "4420", 00:20:45.531 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:45.531 "method": "bdev_nvme_attach_controller", 00:20:45.531 "req_id": 1 00:20:45.531 } 00:20:45.531 Got JSON-RPC error response 00:20:45.531 response: 00:20:45.531 { 00:20:45.531 "code": -32602, 00:20:45.531 "message": "Invalid parameters" 00:20:45.531 } 00:20:45.789 17:24:54 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:45.789 17:24:54 -- common/autotest_common.sh@641 -- # es=1 00:20:45.789 17:24:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:45.789 17:24:54 -- 
common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:45.789 17:24:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:45.789 17:24:54 -- host/auth.sh@121 -- # jq length 00:20:45.789 17:24:54 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.789 17:24:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.789 17:24:54 -- common/autotest_common.sh@10 -- # set +x 00:20:45.789 17:24:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.789 17:24:54 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:20:45.789 17:24:54 -- host/auth.sh@124 -- # get_main_ns_ip 00:20:45.789 17:24:54 -- nvmf/common.sh@717 -- # local ip 00:20:45.789 17:24:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:45.789 17:24:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:45.789 17:24:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.789 17:24:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.789 17:24:54 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:20:45.789 17:24:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:45.789 17:24:54 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:45.789 17:24:54 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:45.789 17:24:54 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:45.789 17:24:54 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:45.789 17:24:54 -- common/autotest_common.sh@638 -- # local es=0 00:20:45.789 17:24:54 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:45.789 17:24:54 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:45.789 17:24:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:45.789 17:24:54 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:45.789 17:24:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:45.789 17:24:54 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:45.789 17:24:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.789 17:24:54 -- common/autotest_common.sh@10 -- # set +x 00:20:45.790 request: 00:20:45.790 { 00:20:45.790 "name": "nvme0", 00:20:45.790 "trtype": "rdma", 00:20:45.790 "traddr": "192.168.100.8", 00:20:45.790 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:45.790 "adrfam": "ipv4", 00:20:45.790 "trsvcid": "4420", 00:20:45.790 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:45.790 "dhchap_key": "key2", 00:20:45.790 "method": "bdev_nvme_attach_controller", 00:20:45.790 "req_id": 1 00:20:45.790 } 00:20:45.790 Got JSON-RPC error response 00:20:45.790 response: 00:20:45.790 { 00:20:45.790 "code": -32602, 00:20:45.790 "message": "Invalid parameters" 00:20:45.790 } 00:20:45.790 17:24:54 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:45.790 17:24:54 -- common/autotest_common.sh@641 -- # es=1 00:20:45.790 17:24:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:45.790 17:24:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:45.790 17:24:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:45.790 17:24:54 -- host/auth.sh@127 -- # 
rpc_cmd bdev_nvme_get_controllers 00:20:45.790 17:24:54 -- host/auth.sh@127 -- # jq length 00:20:45.790 17:24:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.790 17:24:54 -- common/autotest_common.sh@10 -- # set +x 00:20:45.790 17:24:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.790 17:24:54 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:20:45.790 17:24:54 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:20:45.790 17:24:54 -- host/auth.sh@130 -- # cleanup 00:20:45.790 17:24:54 -- host/auth.sh@24 -- # nvmftestfini 00:20:45.790 17:24:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:45.790 17:24:54 -- nvmf/common.sh@117 -- # sync 00:20:45.790 17:24:54 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:45.790 17:24:54 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:45.790 17:24:54 -- nvmf/common.sh@120 -- # set +e 00:20:45.790 17:24:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:45.790 17:24:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:45.790 rmmod nvme_rdma 00:20:45.790 rmmod nvme_fabrics 00:20:45.790 17:24:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:45.790 17:24:55 -- nvmf/common.sh@124 -- # set -e 00:20:45.790 17:24:55 -- nvmf/common.sh@125 -- # return 0 00:20:45.790 17:24:55 -- nvmf/common.sh@478 -- # '[' -n 3052265 ']' 00:20:45.790 17:24:55 -- nvmf/common.sh@479 -- # killprocess 3052265 00:20:45.790 17:24:55 -- common/autotest_common.sh@936 -- # '[' -z 3052265 ']' 00:20:45.790 17:24:55 -- common/autotest_common.sh@940 -- # kill -0 3052265 00:20:45.790 17:24:55 -- common/autotest_common.sh@941 -- # uname 00:20:45.790 17:24:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:46.049 17:24:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3052265 00:20:46.049 17:24:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:46.049 17:24:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:46.049 17:24:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3052265' 00:20:46.049 killing process with pid 3052265 00:20:46.049 17:24:55 -- common/autotest_common.sh@955 -- # kill 3052265 00:20:46.049 17:24:55 -- common/autotest_common.sh@960 -- # wait 3052265 00:20:46.049 17:24:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:46.049 17:24:55 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:20:46.049 17:24:55 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:46.049 17:24:55 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:46.049 17:24:55 -- host/auth.sh@27 -- # clean_kernel_target 00:20:46.049 17:24:55 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:20:46.049 17:24:55 -- nvmf/common.sh@675 -- # echo 0 00:20:46.049 17:24:55 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:46.049 17:24:55 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:46.049 17:24:55 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:46.049 17:24:55 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:46.049 17:24:55 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:20:46.049 17:24:55 -- nvmf/common.sh@684 -- # modprobe -r nvmet_rdma nvmet 00:20:46.354 17:24:55 -- 
nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:20:48.960 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:20:48.960 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:20:48.960 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:20:48.960 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:20:48.960 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:20:48.960 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:20:48.960 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:20:48.960 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:20:48.960 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:20:48.960 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:20:48.960 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:20:48.960 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:20:48.960 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:20:48.960 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:20:48.960 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:20:48.960 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:20:49.897 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:20:50.177 17:24:59 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.a0E /tmp/spdk.key-null.fh1 /tmp/spdk.key-sha256.OCN /tmp/spdk.key-sha384.tiH /tmp/spdk.key-sha512.69I /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:20:50.177 17:24:59 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:20:52.704 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:20:52.704 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:20:52.704 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:20:52.704 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:20:52.704 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:20:52.704 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:20:52.704 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:20:52.704 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:20:52.704 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:20:52.704 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:20:52.704 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:20:52.704 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:20:52.704 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:20:52.704 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:20:52.704 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:20:52.704 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:20:52.704 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:20:52.704 00:20:52.704 real 0m51.667s 00:20:52.704 user 0m47.241s 00:20:52.704 sys 0m11.566s 00:20:52.704 17:25:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:52.704 17:25:01 -- common/autotest_common.sh@10 -- # set +x 00:20:52.704 ************************************ 00:20:52.704 END TEST nvmf_auth 00:20:52.704 ************************************ 00:20:52.704 17:25:01 -- nvmf/nvmf.sh@104 -- # [[ rdma == \t\c\p ]] 00:20:52.704 17:25:01 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:20:52.704 17:25:01 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]] 00:20:52.704 17:25:01 -- nvmf/nvmf.sh@118 -- # [[ phy == phy ]] 00:20:52.704 17:25:01 -- nvmf/nvmf.sh@119 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:20:52.704 17:25:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:52.704 
17:25:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:52.704 17:25:01 -- common/autotest_common.sh@10 -- # set +x 00:20:52.704 ************************************ 00:20:52.704 START TEST nvmf_bdevperf 00:20:52.704 ************************************ 00:20:52.704 17:25:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:20:52.705 * Looking for test storage... 00:20:52.705 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:52.705 17:25:01 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:52.705 17:25:01 -- nvmf/common.sh@7 -- # uname -s 00:20:52.705 17:25:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:52.705 17:25:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:52.705 17:25:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:52.705 17:25:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:52.705 17:25:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:52.705 17:25:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:52.705 17:25:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:52.705 17:25:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:52.705 17:25:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:52.705 17:25:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:52.705 17:25:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:20:52.705 17:25:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:20:52.705 17:25:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:52.705 17:25:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:52.705 17:25:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:52.705 17:25:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:52.705 17:25:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:52.705 17:25:01 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:52.705 17:25:01 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:52.705 17:25:01 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:52.705 17:25:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.705 17:25:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.705 17:25:01 -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.705 17:25:01 -- paths/export.sh@5 -- # export PATH 00:20:52.705 17:25:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.705 17:25:01 -- nvmf/common.sh@47 -- # : 0 00:20:52.705 17:25:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:52.705 17:25:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:52.705 17:25:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:52.705 17:25:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:52.705 17:25:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:52.705 17:25:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:52.705 17:25:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:52.705 17:25:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:52.705 17:25:01 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:52.705 17:25:01 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:52.705 17:25:01 -- host/bdevperf.sh@24 -- # nvmftestinit 00:20:52.705 17:25:01 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:20:52.705 17:25:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:52.705 17:25:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:52.705 17:25:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:52.705 17:25:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:52.705 17:25:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.705 17:25:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:52.705 17:25:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.705 17:25:01 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:52.705 17:25:01 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:52.705 17:25:01 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:52.705 17:25:01 -- common/autotest_common.sh@10 -- # set +x 00:20:57.973 17:25:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:57.973 17:25:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:57.973 17:25:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:57.973 17:25:06 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:57.973 17:25:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:57.973 17:25:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:57.973 17:25:06 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:57.973 17:25:06 -- nvmf/common.sh@295 -- # net_devs=() 00:20:57.973 17:25:06 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:57.973 17:25:06 
-- nvmf/common.sh@296 -- # e810=() 00:20:57.973 17:25:06 -- nvmf/common.sh@296 -- # local -ga e810 00:20:57.973 17:25:06 -- nvmf/common.sh@297 -- # x722=() 00:20:57.973 17:25:06 -- nvmf/common.sh@297 -- # local -ga x722 00:20:57.973 17:25:06 -- nvmf/common.sh@298 -- # mlx=() 00:20:57.973 17:25:06 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:57.973 17:25:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:57.973 17:25:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:57.973 17:25:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:57.973 17:25:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:57.973 17:25:06 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:57.973 17:25:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:57.973 17:25:06 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:57.973 17:25:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:57.973 17:25:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:57.973 17:25:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:57.973 17:25:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:57.973 17:25:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:57.973 17:25:06 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:57.973 17:25:06 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:57.973 17:25:06 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:57.973 17:25:06 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:57.973 17:25:06 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:57.973 17:25:06 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:57.973 17:25:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:57.973 17:25:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:20:57.973 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:20:57.973 17:25:06 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:57.973 17:25:06 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:57.973 17:25:06 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:57.973 17:25:06 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:57.973 17:25:06 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:57.973 17:25:06 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:57.973 17:25:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:57.973 17:25:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:20:57.973 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:20:57.973 17:25:06 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:57.973 17:25:06 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:57.973 17:25:06 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:57.973 17:25:06 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:57.973 17:25:06 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:57.973 17:25:06 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:57.973 17:25:06 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:57.973 17:25:06 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:57.973 17:25:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:57.973 17:25:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.973 17:25:06 -- nvmf/common.sh@384 
-- # (( 1 == 0 )) 00:20:57.973 17:25:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.973 17:25:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:20:57.973 Found net devices under 0000:da:00.0: mlx_0_0 00:20:57.973 17:25:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:57.973 17:25:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:57.973 17:25:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.973 17:25:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:57.974 17:25:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.974 17:25:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:20:57.974 Found net devices under 0000:da:00.1: mlx_0_1 00:20:57.974 17:25:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:57.974 17:25:06 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:57.974 17:25:06 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:57.974 17:25:06 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:57.974 17:25:06 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:20:57.974 17:25:06 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:20:57.974 17:25:06 -- nvmf/common.sh@409 -- # rdma_device_init 00:20:57.974 17:25:06 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:20:57.974 17:25:06 -- nvmf/common.sh@58 -- # uname 00:20:57.974 17:25:06 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:57.974 17:25:06 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:57.974 17:25:06 -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:57.974 17:25:06 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:57.974 17:25:06 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:57.974 17:25:06 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:57.974 17:25:06 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:57.974 17:25:06 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:57.974 17:25:06 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:20:57.974 17:25:06 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:57.974 17:25:06 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:57.974 17:25:06 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:57.974 17:25:06 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:57.974 17:25:06 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:57.974 17:25:06 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:57.974 17:25:06 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:57.974 17:25:06 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:57.974 17:25:06 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:57.974 17:25:06 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:57.974 17:25:06 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:57.974 17:25:06 -- nvmf/common.sh@105 -- # continue 2 00:20:57.974 17:25:06 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:57.974 17:25:06 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:57.974 17:25:06 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:57.974 17:25:06 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:57.974 17:25:06 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:57.974 17:25:06 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:57.974 17:25:06 -- nvmf/common.sh@105 -- # continue 2 00:20:57.974 17:25:06 -- nvmf/common.sh@73 -- # for 
nic_name in $(get_rdma_if_list) 00:20:57.974 17:25:06 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:57.974 17:25:06 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:57.974 17:25:06 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:57.974 17:25:06 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:57.974 17:25:06 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:57.974 17:25:06 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:57.974 17:25:06 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:57.974 17:25:06 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:57.974 434: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:57.974 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:20:57.974 altname enp218s0f0np0 00:20:57.974 altname ens818f0np0 00:20:57.974 inet 192.168.100.8/24 scope global mlx_0_0 00:20:57.974 valid_lft forever preferred_lft forever 00:20:57.974 17:25:06 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:57.974 17:25:06 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:57.974 17:25:06 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:57.974 17:25:06 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:57.974 17:25:06 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:57.974 17:25:06 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:57.974 17:25:06 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:57.974 17:25:06 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:57.974 17:25:06 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:57.974 435: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:57.974 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:20:57.974 altname enp218s0f1np1 00:20:57.974 altname ens818f1np1 00:20:57.974 inet 192.168.100.9/24 scope global mlx_0_1 00:20:57.974 valid_lft forever preferred_lft forever 00:20:57.974 17:25:06 -- nvmf/common.sh@411 -- # return 0 00:20:57.974 17:25:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:57.974 17:25:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:57.974 17:25:06 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:20:57.974 17:25:06 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:20:57.974 17:25:06 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:57.974 17:25:06 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:57.974 17:25:06 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:57.974 17:25:06 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:57.974 17:25:06 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:57.974 17:25:06 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:57.974 17:25:06 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:57.974 17:25:06 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:57.974 17:25:06 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:57.974 17:25:06 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:57.974 17:25:06 -- nvmf/common.sh@105 -- # continue 2 00:20:57.974 17:25:06 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:57.974 17:25:06 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:57.974 17:25:06 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:57.974 17:25:06 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:57.974 17:25:06 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:57.974 17:25:06 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:57.974 17:25:06 -- 
nvmf/common.sh@105 -- # continue 2 00:20:57.974 17:25:07 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:57.974 17:25:07 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:57.974 17:25:07 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:57.974 17:25:07 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:57.974 17:25:07 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:57.974 17:25:07 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:57.974 17:25:07 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:57.974 17:25:07 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:57.974 17:25:07 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:57.974 17:25:07 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:57.974 17:25:07 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:57.974 17:25:07 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:57.974 17:25:07 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:20:57.974 192.168.100.9' 00:20:57.974 17:25:07 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:57.974 192.168.100.9' 00:20:57.974 17:25:07 -- nvmf/common.sh@446 -- # head -n 1 00:20:57.974 17:25:07 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:57.974 17:25:07 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:20:57.974 192.168.100.9' 00:20:57.974 17:25:07 -- nvmf/common.sh@447 -- # tail -n +2 00:20:57.974 17:25:07 -- nvmf/common.sh@447 -- # head -n 1 00:20:57.974 17:25:07 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:57.974 17:25:07 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:20:57.974 17:25:07 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:57.974 17:25:07 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:20:57.974 17:25:07 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:20:57.974 17:25:07 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:20:57.974 17:25:07 -- host/bdevperf.sh@25 -- # tgt_init 00:20:57.974 17:25:07 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:20:57.974 17:25:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:57.974 17:25:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:57.974 17:25:07 -- common/autotest_common.sh@10 -- # set +x 00:20:57.974 17:25:07 -- nvmf/common.sh@470 -- # nvmfpid=3058031 00:20:57.974 17:25:07 -- nvmf/common.sh@471 -- # waitforlisten 3058031 00:20:57.974 17:25:07 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:57.974 17:25:07 -- common/autotest_common.sh@817 -- # '[' -z 3058031 ']' 00:20:57.974 17:25:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.974 17:25:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:57.974 17:25:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.974 17:25:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:57.974 17:25:07 -- common/autotest_common.sh@10 -- # set +x 00:20:57.974 [2024-04-24 17:25:07.111977] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:20:57.974 [2024-04-24 17:25:07.112032] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:57.974 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.974 [2024-04-24 17:25:07.168048] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:58.233 [2024-04-24 17:25:07.246028] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:58.233 [2024-04-24 17:25:07.246075] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:58.233 [2024-04-24 17:25:07.246082] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:58.233 [2024-04-24 17:25:07.246088] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:58.233 [2024-04-24 17:25:07.246093] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:58.234 [2024-04-24 17:25:07.246194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:58.234 [2024-04-24 17:25:07.246278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:58.234 [2024-04-24 17:25:07.246279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:58.801 17:25:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:58.801 17:25:07 -- common/autotest_common.sh@850 -- # return 0 00:20:58.801 17:25:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:58.801 17:25:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:58.801 17:25:07 -- common/autotest_common.sh@10 -- # set +x 00:20:58.801 17:25:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:58.801 17:25:07 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:58.801 17:25:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:58.801 17:25:07 -- common/autotest_common.sh@10 -- # set +x 00:20:58.801 [2024-04-24 17:25:07.967172] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xbd4680/0xbd8b70) succeed. 00:20:58.801 [2024-04-24 17:25:07.977090] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xbd5bd0/0xc1a200) succeed. 
00:20:59.060 17:25:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.060 17:25:08 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:59.060 17:25:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.060 17:25:08 -- common/autotest_common.sh@10 -- # set +x 00:20:59.060 Malloc0 00:20:59.060 17:25:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.060 17:25:08 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:59.060 17:25:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.060 17:25:08 -- common/autotest_common.sh@10 -- # set +x 00:20:59.060 17:25:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.060 17:25:08 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:59.060 17:25:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.060 17:25:08 -- common/autotest_common.sh@10 -- # set +x 00:20:59.060 17:25:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.060 17:25:08 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:59.060 17:25:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.060 17:25:08 -- common/autotest_common.sh@10 -- # set +x 00:20:59.060 [2024-04-24 17:25:08.115250] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:59.060 17:25:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.060 17:25:08 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:20:59.060 17:25:08 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:20:59.060 17:25:08 -- nvmf/common.sh@521 -- # config=() 00:20:59.060 17:25:08 -- nvmf/common.sh@521 -- # local subsystem config 00:20:59.060 17:25:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:59.060 17:25:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:59.060 { 00:20:59.060 "params": { 00:20:59.060 "name": "Nvme$subsystem", 00:20:59.060 "trtype": "$TEST_TRANSPORT", 00:20:59.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.060 "adrfam": "ipv4", 00:20:59.060 "trsvcid": "$NVMF_PORT", 00:20:59.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.060 "hdgst": ${hdgst:-false}, 00:20:59.060 "ddgst": ${ddgst:-false} 00:20:59.060 }, 00:20:59.060 "method": "bdev_nvme_attach_controller" 00:20:59.060 } 00:20:59.060 EOF 00:20:59.060 )") 00:20:59.060 17:25:08 -- nvmf/common.sh@543 -- # cat 00:20:59.060 17:25:08 -- nvmf/common.sh@545 -- # jq . 00:20:59.060 17:25:08 -- nvmf/common.sh@546 -- # IFS=, 00:20:59.060 17:25:08 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:59.060 "params": { 00:20:59.060 "name": "Nvme1", 00:20:59.060 "trtype": "rdma", 00:20:59.060 "traddr": "192.168.100.8", 00:20:59.060 "adrfam": "ipv4", 00:20:59.060 "trsvcid": "4420", 00:20:59.060 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.060 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:59.060 "hdgst": false, 00:20:59.060 "ddgst": false 00:20:59.060 }, 00:20:59.060 "method": "bdev_nvme_attach_controller" 00:20:59.060 }' 00:20:59.060 [2024-04-24 17:25:08.161270] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:20:59.060 [2024-04-24 17:25:08.161312] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3058065 ] 00:20:59.060 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.060 [2024-04-24 17:25:08.215714] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.060 [2024-04-24 17:25:08.289609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.319 Running I/O for 1 seconds... 00:21:00.253 00:21:00.253 Latency(us) 00:21:00.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.253 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:00.253 Verification LBA range: start 0x0 length 0x4000 00:21:00.253 Nvme1n1 : 1.00 18215.35 71.15 0.00 0.00 6988.04 2543.42 12046.14 00:21:00.253 =================================================================================================================== 00:21:00.253 Total : 18215.35 71.15 0.00 0.00 6988.04 2543.42 12046.14 00:21:00.511 17:25:09 -- host/bdevperf.sh@30 -- # bdevperfpid=3058094 00:21:00.511 17:25:09 -- host/bdevperf.sh@32 -- # sleep 3 00:21:00.511 17:25:09 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:21:00.511 17:25:09 -- nvmf/common.sh@521 -- # config=() 00:21:00.511 17:25:09 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:21:00.511 17:25:09 -- nvmf/common.sh@521 -- # local subsystem config 00:21:00.511 17:25:09 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:00.511 17:25:09 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:00.511 { 00:21:00.511 "params": { 00:21:00.511 "name": "Nvme$subsystem", 00:21:00.511 "trtype": "$TEST_TRANSPORT", 00:21:00.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:00.511 "adrfam": "ipv4", 00:21:00.511 "trsvcid": "$NVMF_PORT", 00:21:00.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:00.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:00.511 "hdgst": ${hdgst:-false}, 00:21:00.511 "ddgst": ${ddgst:-false} 00:21:00.511 }, 00:21:00.511 "method": "bdev_nvme_attach_controller" 00:21:00.511 } 00:21:00.511 EOF 00:21:00.511 )") 00:21:00.511 17:25:09 -- nvmf/common.sh@543 -- # cat 00:21:00.511 17:25:09 -- nvmf/common.sh@545 -- # jq . 00:21:00.511 17:25:09 -- nvmf/common.sh@546 -- # IFS=, 00:21:00.511 17:25:09 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:00.511 "params": { 00:21:00.511 "name": "Nvme1", 00:21:00.511 "trtype": "rdma", 00:21:00.511 "traddr": "192.168.100.8", 00:21:00.511 "adrfam": "ipv4", 00:21:00.511 "trsvcid": "4420", 00:21:00.511 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.511 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:00.511 "hdgst": false, 00:21:00.511 "ddgst": false 00:21:00.511 }, 00:21:00.511 "method": "bdev_nvme_attach_controller" 00:21:00.511 }' 00:21:00.511 [2024-04-24 17:25:09.750486] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:21:00.511 [2024-04-24 17:25:09.750534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3058094 ] 00:21:00.769 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.769 [2024-04-24 17:25:09.806193] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.769 [2024-04-24 17:25:09.874774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.026 Running I/O for 15 seconds... 00:21:03.553 17:25:12 -- host/bdevperf.sh@33 -- # kill -9 3058031 00:21:03.553 17:25:12 -- host/bdevperf.sh@35 -- # sleep 3 00:21:04.928 [2024-04-24 17:25:13.740118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:129000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x186f00 00:21:04.928 [2024-04-24 17:25:13.740155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.928 [2024-04-24 17:25:13.740173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:129008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x186f00 00:21:04.928 [2024-04-24 17:25:13.740181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.928 [2024-04-24 17:25:13.740190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:129016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x186f00 00:21:04.928 [2024-04-24 17:25:13.740197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.928 [2024-04-24 17:25:13.740205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.928 [2024-04-24 17:25:13.740211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.928 [2024-04-24 17:25:13.740219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.928 [2024-04-24 17:25:13.740225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.928 [2024-04-24 17:25:13.740234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.928 [2024-04-24 17:25:13.740240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.928 [2024-04-24 17:25:13.740248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:129048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.928 [2024-04-24 17:25:13.740254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.928 [2024-04-24 17:25:13.740262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.928 [2024-04-24 
17:25:13.740268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.929 [2024-04-24 17:25:13.740276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:129064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.929 [2024-04-24 17:25:13.740282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.929 [2024-04-24 17:25:13.740290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.929 [2024-04-24 17:25:13.740296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.929 [2024-04-24 17:25:13.740304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.929 [2024-04-24 17:25:13.740315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.929 [2024-04-24 17:25:13.740323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.929 [2024-04-24 17:25:13.740330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.929 [2024-04-24 17:25:13.740339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.929 [2024-04-24 17:25:13.740346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.929 [2024-04-24 17:25:13.740356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:129104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.929 [2024-04-24 17:25:13.740362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.929 [2024-04-24 17:25:13.740371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:129112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.929 [2024-04-24 17:25:13.740378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.929 [2024-04-24 17:25:13.740386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.929 [2024-04-24 17:25:13.740392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.929 [2024-04-24 17:25:13.740400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.929 [2024-04-24 17:25:13.740406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.929 [2024-04-24 17:25:13.740414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:129136 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.929 [2024-04-24 17:25:13.740420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.929 [2024-04-24 17:25:13.740431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:129144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.929 [2024-04-24 17:25:13.740438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.929 [2024-04-24 17:25:13.740446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:129152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.929 [2024-04-24 17:25:13.740453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.929 [2024-04-24 17:25:13.740461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:129160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.929 [2024-04-24 17:25:13.740468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.929 [2024-04-24 17:25:13.740477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.929 [2024-04-24 17:25:13.740484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.929 [2024-04-24 17:25:13.740492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.929 [2024-04-24 17:25:13.740498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.929 [2024-04-24 17:25:13.740510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.929 [2024-04-24 17:25:13.740516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.929 [2024-04-24 17:25:13.740525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.929 [2024-04-24 17:25:13.740532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.929 [2024-04-24 17:25:13.740539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.929 [2024-04-24 17:25:13.740545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.929 [2024-04-24 17:25:13.740553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:129208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.929 [2024-04-24 17:25:13.740559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.929 [2024-04-24 17:25:13.740567] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:129216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.929 [2024-04-24 17:25:13.740573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.929 [2024-04-24 17:25:13.740581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.929 [2024-04-24 17:25:13.740587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.929 [2024-04-24 17:25:13.740595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:129232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.929 [2024-04-24 17:25:13.740601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.929 [2024-04-24 17:25:13.740609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.929 [2024-04-24 17:25:13.740615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.929 [2024-04-24 17:25:13.740623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:129248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.929 [2024-04-24 17:25:13.740628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.929 [2024-04-24 17:25:13.740636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:129256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.929 [2024-04-24 17:25:13.740642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.929 [2024-04-24 17:25:13.740650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:129264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.929 [2024-04-24 17:25:13.740656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.929 [2024-04-24 17:25:13.740665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:129272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.929 [2024-04-24 17:25:13.740671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.929 [2024-04-24 17:25:13.740680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:129280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.929 [2024-04-24 17:25:13.740686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.929 [2024-04-24 17:25:13.740694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:129288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.929 [2024-04-24 17:25:13.740701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.929 [2024-04-24 
17:25:13.740708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.929 [2024-04-24 17:25:13.740715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.929 [2024-04-24 17:25:13.740722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:129304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.929 [2024-04-24 17:25:13.740728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.929 [2024-04-24 17:25:13.740736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:129312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.929 [2024-04-24 17:25:13.740742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.929 [2024-04-24 17:25:13.740750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.740756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.740764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:129328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.740770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.740778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.740784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.740792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:129344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.740798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.740805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:129352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.740812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.740820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:129360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.740831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.740839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.740845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 
cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.740854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:129376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.740860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.740868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:129384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.740874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.740882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:129392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.740889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.740898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:129400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.740904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.740912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:129408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.740918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.740926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:129416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.740933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.740940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:129424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.740946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.740954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:129432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.740960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.740968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:129440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.740974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.740981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:129448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.740988] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.740996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:129456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.741003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.741011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:129464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.741017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.741024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:129472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.741033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.741040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:129480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.741047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.741055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:129488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.741061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.741068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.741075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.741083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:129504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.741089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.741096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:129512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.741103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.741110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:129520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.741117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.741125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:129528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 
17:25:13.741131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.741139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:129536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.741145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.741153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:129544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.741158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.741166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:129552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.741173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.741180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:129560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.741186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.741194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.741201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.741209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:129576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.741216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.741224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.741230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.741238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:129592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.741244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.741251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:129600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.741258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.741265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:129608 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.741272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.741279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:129616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.930 [2024-04-24 17:25:13.741285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.930 [2024-04-24 17:25:13.741293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:129624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:129632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:129640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:129648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:129656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:129664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:129672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:129680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741408] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:129688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:129696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:129704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:129712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:129720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:129728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:129736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:129744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:129752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:129760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 
17:25:13.741552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:129768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:129776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:129784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:129792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:129800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:129808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:129816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:129824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:129832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:129840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 
cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:129848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:129856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:129864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:129872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:129880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:129888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:129896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:129904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:129912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:129920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741837] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:129928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.931 [2024-04-24 17:25:13.741851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.931 [2024-04-24 17:25:13.741859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:129936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.932 [2024-04-24 17:25:13.741865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.932 [2024-04-24 17:25:13.741873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:129944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.932 [2024-04-24 17:25:13.741879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.932 [2024-04-24 17:25:13.741887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.932 [2024-04-24 17:25:13.741893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.932 [2024-04-24 17:25:13.741900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:129960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.932 [2024-04-24 17:25:13.741910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.932 [2024-04-24 17:25:13.741918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:129968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.932 [2024-04-24 17:25:13.741924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.932 [2024-04-24 17:25:13.741932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:129976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.932 [2024-04-24 17:25:13.741938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.932 [2024-04-24 17:25:13.741946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:129984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.932 [2024-04-24 17:25:13.741952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.932 [2024-04-24 17:25:13.741959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:129992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.932 [2024-04-24 17:25:13.741965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.932 [2024-04-24 17:25:13.741973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:130000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.932 [2024-04-24 
17:25:13.741979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.932 [2024-04-24 17:25:13.741987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:130008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:04.932 [2024-04-24 17:25:13.741993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5240 p:0 m:0 dnr:0 00:21:04.932 [2024-04-24 17:25:13.744126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:04.932 [2024-04-24 17:25:13.744158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:04.932 [2024-04-24 17:25:13.744178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130016 len:8 PRP1 0x0 PRP2 0x0 00:21:04.932 [2024-04-24 17:25:13.744200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.932 [2024-04-24 17:25:13.744268] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a00 was disconnected and freed. reset controller. 00:21:04.932 [2024-04-24 17:25:13.746990] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:04.932 [2024-04-24 17:25:13.761184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:04.932 [2024-04-24 17:25:13.764450] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:04.932 [2024-04-24 17:25:13.764467] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:04.932 [2024-04-24 17:25:13.764473] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:21:05.866 [2024-04-24 17:25:14.768452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:05.866 [2024-04-24 17:25:14.768501] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:05.866 [2024-04-24 17:25:14.769089] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:05.866 [2024-04-24 17:25:14.769122] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:05.866 [2024-04-24 17:25:14.769134] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:21:05.866 [2024-04-24 17:25:14.771638] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:05.866 [2024-04-24 17:25:14.775134] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:05.866 [2024-04-24 17:25:14.777623] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:05.866 [2024-04-24 17:25:14.777640] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:05.866 [2024-04-24 17:25:14.777645] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:21:06.801 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3058031 Killed "${NVMF_APP[@]}" "$@" 00:21:06.801 17:25:15 -- host/bdevperf.sh@36 -- # tgt_init 00:21:06.801 17:25:15 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:21:06.801 17:25:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:06.801 17:25:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:06.801 17:25:15 -- common/autotest_common.sh@10 -- # set +x 00:21:06.801 17:25:15 -- nvmf/common.sh@470 -- # nvmfpid=3058178 00:21:06.801 17:25:15 -- nvmf/common.sh@471 -- # waitforlisten 3058178 00:21:06.801 17:25:15 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:06.801 17:25:15 -- common/autotest_common.sh@817 -- # '[' -z 3058178 ']' 00:21:06.801 17:25:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.801 17:25:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:06.801 17:25:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.801 17:25:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:06.801 17:25:15 -- common/autotest_common.sh@10 -- # set +x 00:21:06.801 [2024-04-24 17:25:15.770348] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:21:06.801 [2024-04-24 17:25:15.770389] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.801 [2024-04-24 17:25:15.781789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:06.801 [2024-04-24 17:25:15.781808] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:06.801 [2024-04-24 17:25:15.781986] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:06.801 [2024-04-24 17:25:15.781997] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:06.801 [2024-04-24 17:25:15.782005] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:21:06.801 [2024-04-24 17:25:15.784732] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:06.801 [2024-04-24 17:25:15.789570] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:06.801 [2024-04-24 17:25:15.792021] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:06.801 [2024-04-24 17:25:15.792040] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:06.801 [2024-04-24 17:25:15.792046] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:21:06.801 EAL: No free 2048 kB hugepages reported on node 1 00:21:06.801 [2024-04-24 17:25:15.827232] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:06.801 [2024-04-24 17:25:15.903486] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:06.801 [2024-04-24 17:25:15.903526] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.801 [2024-04-24 17:25:15.903533] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:06.801 [2024-04-24 17:25:15.903539] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:06.801 [2024-04-24 17:25:15.903544] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:06.801 [2024-04-24 17:25:15.903584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.801 [2024-04-24 17:25:15.903670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:06.801 [2024-04-24 17:25:15.903671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.369 17:25:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:07.369 17:25:16 -- common/autotest_common.sh@850 -- # return 0 00:21:07.369 17:25:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:07.369 17:25:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:07.369 17:25:16 -- common/autotest_common.sh@10 -- # set +x 00:21:07.369 17:25:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.369 17:25:16 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:07.370 17:25:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.370 17:25:16 -- common/autotest_common.sh@10 -- # set +x 00:21:07.628 [2024-04-24 17:25:16.639190] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f3a680/0x1f3eb70) succeed. 00:21:07.628 [2024-04-24 17:25:16.649004] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f3bbd0/0x1f80200) succeed. 
00:21:07.628 17:25:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.628 17:25:16 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:07.628 17:25:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.628 17:25:16 -- common/autotest_common.sh@10 -- # set +x 00:21:07.628 Malloc0 00:21:07.628 17:25:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.628 17:25:16 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:07.628 17:25:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.628 17:25:16 -- common/autotest_common.sh@10 -- # set +x 00:21:07.628 17:25:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.628 17:25:16 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:07.628 17:25:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.628 17:25:16 -- common/autotest_common.sh@10 -- # set +x 00:21:07.628 17:25:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.628 17:25:16 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:07.628 17:25:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.628 17:25:16 -- common/autotest_common.sh@10 -- # set +x 00:21:07.628 [2024-04-24 17:25:16.793832] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:07.628 [2024-04-24 17:25:16.795984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:07.628 [2024-04-24 17:25:16.796012] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:07.628 [2024-04-24 17:25:16.796187] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:07.628 [2024-04-24 17:25:16.796196] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:07.628 [2024-04-24 17:25:16.796204] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:21:07.628 17:25:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.628 17:25:16 -- host/bdevperf.sh@38 -- # wait 3058094 00:21:07.628 [2024-04-24 17:25:16.798931] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:07.628 [2024-04-24 17:25:16.801832] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:07.628 [2024-04-24 17:25:16.848392] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
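The rebuild of the target is visible in the xtrace above as host/bdevperf.sh@15 through @21: nvmf_tgt is restarted with core mask 0xE (three reactors, matching "Total cores available: 3"), the RDMA transport is created, a 64 MiB malloc bdev with 512-byte blocks is added, subsystem nqn.2016-06.io.spdk:cnode1 is created, the bdev is attached as a namespace, and an RDMA listener is opened on 192.168.100.8:4420, at which point the host's pending reset finally succeeds. A condensed, hedged sketch of that sequence follows; the RPC names and arguments are copied from the log, while the direct scripts/rpc.py invocation style and backgrounding are assumptions (the test drives them through its rpc_cmd helper).

# Hedged reconstruction of tgt_init (bdevperf.sh lines 15-21) as plain RPC calls.
# The paths and use of rpc.py directly are assumptions; the commands and arguments
# are taken verbatim from the xtrace above.
build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &   # restart the target app (reactors on cores 1-3)
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB backing bdev, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420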
00:21:17.599 00:21:17.599 Latency(us) 00:21:17.599 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.599 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:17.599 Verification LBA range: start 0x0 length 0x4000 00:21:17.599 Nvme1n1 : 15.00 13328.19 52.06 10536.90 0.00 5342.69 325.73 1030600.41 00:21:17.599 =================================================================================================================== 00:21:17.599 Total : 13328.19 52.06 10536.90 0.00 5342.69 325.73 1030600.41 00:21:17.599 17:25:25 -- host/bdevperf.sh@39 -- # sync 00:21:17.599 17:25:25 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:17.599 17:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:17.599 17:25:25 -- common/autotest_common.sh@10 -- # set +x 00:21:17.599 17:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:17.600 17:25:25 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:21:17.600 17:25:25 -- host/bdevperf.sh@44 -- # nvmftestfini 00:21:17.600 17:25:25 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:17.600 17:25:25 -- nvmf/common.sh@117 -- # sync 00:21:17.600 17:25:25 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:17.600 17:25:25 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:17.600 17:25:25 -- nvmf/common.sh@120 -- # set +e 00:21:17.600 17:25:25 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:17.600 17:25:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:17.600 rmmod nvme_rdma 00:21:17.600 rmmod nvme_fabrics 00:21:17.600 17:25:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:17.600 17:25:25 -- nvmf/common.sh@124 -- # set -e 00:21:17.600 17:25:25 -- nvmf/common.sh@125 -- # return 0 00:21:17.600 17:25:25 -- nvmf/common.sh@478 -- # '[' -n 3058178 ']' 00:21:17.600 17:25:25 -- nvmf/common.sh@479 -- # killprocess 3058178 00:21:17.600 17:25:25 -- common/autotest_common.sh@936 -- # '[' -z 3058178 ']' 00:21:17.600 17:25:25 -- common/autotest_common.sh@940 -- # kill -0 3058178 00:21:17.600 17:25:25 -- common/autotest_common.sh@941 -- # uname 00:21:17.600 17:25:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:17.600 17:25:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3058178 00:21:17.600 17:25:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:17.600 17:25:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:17.600 17:25:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3058178' 00:21:17.600 killing process with pid 3058178 00:21:17.600 17:25:25 -- common/autotest_common.sh@955 -- # kill 3058178 00:21:17.600 17:25:25 -- common/autotest_common.sh@960 -- # wait 3058178 00:21:17.600 17:25:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:17.600 17:25:25 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:21:17.600 00:21:17.600 real 0m24.084s 00:21:17.600 user 1m4.285s 00:21:17.600 sys 0m5.051s 00:21:17.600 17:25:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:17.600 17:25:25 -- common/autotest_common.sh@10 -- # set +x 00:21:17.600 ************************************ 00:21:17.600 END TEST nvmf_bdevperf 00:21:17.600 ************************************ 00:21:17.600 17:25:25 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:21:17.600 17:25:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:17.600 
17:25:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:17.600 17:25:25 -- common/autotest_common.sh@10 -- # set +x 00:21:17.600 ************************************ 00:21:17.600 START TEST nvmf_target_disconnect 00:21:17.600 ************************************ 00:21:17.600 17:25:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:21:17.600 * Looking for test storage... 00:21:17.600 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:17.600 17:25:25 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:17.600 17:25:25 -- nvmf/common.sh@7 -- # uname -s 00:21:17.600 17:25:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:17.600 17:25:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:17.600 17:25:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:17.600 17:25:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:17.600 17:25:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:17.600 17:25:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:17.600 17:25:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:17.600 17:25:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:17.600 17:25:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:17.600 17:25:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:17.600 17:25:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:21:17.600 17:25:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:21:17.600 17:25:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:17.600 17:25:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:17.600 17:25:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:17.600 17:25:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:17.600 17:25:25 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:17.600 17:25:25 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:17.600 17:25:25 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:17.600 17:25:25 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:17.600 17:25:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.600 17:25:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:21:17.600 17:25:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.600 17:25:25 -- paths/export.sh@5 -- # export PATH 00:21:17.600 17:25:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.600 17:25:25 -- nvmf/common.sh@47 -- # : 0 00:21:17.600 17:25:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:17.600 17:25:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:17.600 17:25:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:17.600 17:25:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:17.600 17:25:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:17.600 17:25:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:17.600 17:25:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:17.600 17:25:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:17.600 17:25:25 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:21:17.600 17:25:25 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:21:17.600 17:25:25 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:21:17.600 17:25:25 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:21:17.600 17:25:25 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:21:17.600 17:25:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:17.600 17:25:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:17.600 17:25:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:17.600 17:25:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:17.600 17:25:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.600 17:25:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:17.600 17:25:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.600 17:25:25 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:17.600 17:25:25 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:17.600 17:25:25 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:17.600 17:25:25 -- common/autotest_common.sh@10 -- # set +x 00:21:22.869 17:25:31 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:22.869 17:25:31 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:22.869 17:25:31 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:22.869 17:25:31 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:22.869 17:25:31 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:22.869 17:25:31 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:22.869 17:25:31 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:21:22.869 17:25:31 -- nvmf/common.sh@295 -- # net_devs=() 00:21:22.869 17:25:31 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:22.869 17:25:31 -- nvmf/common.sh@296 -- # e810=() 00:21:22.869 17:25:31 -- nvmf/common.sh@296 -- # local -ga e810 00:21:22.869 17:25:31 -- nvmf/common.sh@297 -- # x722=() 00:21:22.869 17:25:31 -- nvmf/common.sh@297 -- # local -ga x722 00:21:22.869 17:25:31 -- nvmf/common.sh@298 -- # mlx=() 00:21:22.869 17:25:31 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:22.869 17:25:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:22.869 17:25:31 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:22.869 17:25:31 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:22.869 17:25:31 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:22.869 17:25:31 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:22.869 17:25:31 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:22.869 17:25:31 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:22.869 17:25:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:22.869 17:25:31 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:22.869 17:25:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:22.869 17:25:31 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:22.869 17:25:31 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:22.869 17:25:31 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:22.869 17:25:31 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:22.869 17:25:31 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:22.869 17:25:31 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:22.869 17:25:31 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:22.869 17:25:31 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:22.869 17:25:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:22.869 17:25:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:21:22.869 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:21:22.869 17:25:31 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:22.869 17:25:31 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:22.869 17:25:31 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:22.869 17:25:31 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:22.869 17:25:31 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:22.869 17:25:31 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:22.869 17:25:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:22.869 17:25:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:21:22.869 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:21:22.869 17:25:31 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:22.869 17:25:31 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:22.869 17:25:31 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:22.869 17:25:31 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:22.869 17:25:31 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:22.869 17:25:31 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:22.869 17:25:31 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:22.869 17:25:31 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:22.869 17:25:31 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:22.869 17:25:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.869 17:25:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:22.869 17:25:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.869 17:25:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:21:22.869 Found net devices under 0000:da:00.0: mlx_0_0 00:21:22.869 17:25:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.869 17:25:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:22.869 17:25:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.869 17:25:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:22.869 17:25:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.869 17:25:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:21:22.869 Found net devices under 0000:da:00.1: mlx_0_1 00:21:22.869 17:25:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.869 17:25:31 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:22.869 17:25:31 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:22.869 17:25:31 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:22.869 17:25:31 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:21:22.869 17:25:31 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:21:22.869 17:25:31 -- nvmf/common.sh@409 -- # rdma_device_init 00:21:22.869 17:25:31 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:21:22.869 17:25:31 -- nvmf/common.sh@58 -- # uname 00:21:22.869 17:25:31 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:22.869 17:25:31 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:22.869 17:25:31 -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:22.869 17:25:31 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:22.869 17:25:31 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:22.869 17:25:31 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:22.869 17:25:31 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:22.869 17:25:31 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:22.869 17:25:31 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:21:22.870 17:25:31 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:22.870 17:25:31 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:22.870 17:25:31 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:22.870 17:25:31 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:22.870 17:25:31 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:22.870 17:25:31 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:22.870 17:25:31 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:22.870 17:25:31 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:22.870 17:25:31 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:22.870 17:25:31 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:22.870 17:25:31 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:22.870 17:25:31 -- nvmf/common.sh@105 -- # continue 2 00:21:22.870 17:25:31 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:22.870 17:25:31 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:22.870 17:25:31 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:22.870 17:25:31 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:22.870 17:25:31 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\1 ]] 00:21:22.870 17:25:31 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:22.870 17:25:31 -- nvmf/common.sh@105 -- # continue 2 00:21:22.870 17:25:31 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:22.870 17:25:31 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:22.870 17:25:31 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:22.870 17:25:31 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:22.870 17:25:31 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:22.870 17:25:31 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:22.870 17:25:31 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:22.870 17:25:31 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:22.870 17:25:31 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:22.870 434: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:22.870 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:21:22.870 altname enp218s0f0np0 00:21:22.870 altname ens818f0np0 00:21:22.870 inet 192.168.100.8/24 scope global mlx_0_0 00:21:22.870 valid_lft forever preferred_lft forever 00:21:22.870 17:25:31 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:22.870 17:25:31 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:22.870 17:25:31 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:22.870 17:25:31 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:22.870 17:25:31 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:22.870 17:25:31 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:22.870 17:25:31 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:22.870 17:25:31 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:22.870 17:25:31 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:22.870 435: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:22.870 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:21:22.870 altname enp218s0f1np1 00:21:22.870 altname ens818f1np1 00:21:22.870 inet 192.168.100.9/24 scope global mlx_0_1 00:21:22.870 valid_lft forever preferred_lft forever 00:21:22.870 17:25:31 -- nvmf/common.sh@411 -- # return 0 00:21:22.870 17:25:31 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:22.870 17:25:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:22.870 17:25:31 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:21:22.870 17:25:31 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:21:22.870 17:25:31 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:22.870 17:25:31 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:22.870 17:25:31 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:22.870 17:25:31 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:22.870 17:25:31 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:22.870 17:25:31 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:22.870 17:25:31 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:22.870 17:25:31 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:22.870 17:25:31 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:22.870 17:25:31 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:22.870 17:25:31 -- nvmf/common.sh@105 -- # continue 2 00:21:22.870 17:25:31 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:22.870 17:25:31 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:22.870 17:25:31 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:22.870 17:25:31 -- nvmf/common.sh@102 -- # for rxe_net_dev 
in "${rxe_net_devs[@]}" 00:21:22.870 17:25:31 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:22.870 17:25:31 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:22.870 17:25:31 -- nvmf/common.sh@105 -- # continue 2 00:21:22.870 17:25:31 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:22.870 17:25:31 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:22.870 17:25:31 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:22.870 17:25:31 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:22.870 17:25:31 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:22.870 17:25:31 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:22.870 17:25:31 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:22.870 17:25:31 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:22.870 17:25:31 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:22.870 17:25:31 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:22.870 17:25:31 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:22.870 17:25:31 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:22.870 17:25:31 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:21:22.870 192.168.100.9' 00:21:22.870 17:25:31 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:21:22.870 192.168.100.9' 00:21:22.870 17:25:31 -- nvmf/common.sh@446 -- # head -n 1 00:21:22.870 17:25:31 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:22.870 17:25:31 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:21:22.870 192.168.100.9' 00:21:22.870 17:25:31 -- nvmf/common.sh@447 -- # tail -n +2 00:21:22.870 17:25:31 -- nvmf/common.sh@447 -- # head -n 1 00:21:22.870 17:25:31 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:22.870 17:25:31 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:21:22.870 17:25:31 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:22.870 17:25:31 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:21:22.870 17:25:31 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:21:22.870 17:25:31 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:21:22.870 17:25:31 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:21:22.870 17:25:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:22.870 17:25:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:22.870 17:25:31 -- common/autotest_common.sh@10 -- # set +x 00:21:22.870 ************************************ 00:21:22.870 START TEST nvmf_target_disconnect_tc1 00:21:22.870 ************************************ 00:21:22.870 17:25:31 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1 00:21:22.870 17:25:31 -- host/target_disconnect.sh@32 -- # set +e 00:21:22.870 17:25:31 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:21:22.870 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.870 [2024-04-24 17:25:31.479613] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:22.870 [2024-04-24 17:25:31.479693] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:22.870 [2024-04-24 17:25:31.479704] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7080 00:21:23.438 [2024-04-24 
17:25:32.483820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:23.438 [2024-04-24 17:25:32.483886] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:21:23.438 [2024-04-24 17:25:32.483912] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:21:23.438 [2024-04-24 17:25:32.483961] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:23.438 [2024-04-24 17:25:32.483983] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:21:23.438 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:21:23.438 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:21:23.438 Initializing NVMe Controllers 00:21:23.438 17:25:32 -- host/target_disconnect.sh@33 -- # trap - ERR 00:21:23.438 17:25:32 -- host/target_disconnect.sh@33 -- # print_backtrace 00:21:23.438 17:25:32 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:21:23.438 17:25:32 -- common/autotest_common.sh@1139 -- # return 0 00:21:23.438 17:25:32 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:21:23.438 17:25:32 -- host/target_disconnect.sh@41 -- # set -e 00:21:23.438 00:21:23.438 real 0m1.097s 00:21:23.438 user 0m0.940s 00:21:23.438 sys 0m0.147s 00:21:23.438 17:25:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:23.438 17:25:32 -- common/autotest_common.sh@10 -- # set +x 00:21:23.438 ************************************ 00:21:23.438 END TEST nvmf_target_disconnect_tc1 00:21:23.438 ************************************ 00:21:23.438 17:25:32 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:21:23.438 17:25:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:23.438 17:25:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:23.438 17:25:32 -- common/autotest_common.sh@10 -- # set +x 00:21:23.438 ************************************ 00:21:23.438 START TEST nvmf_target_disconnect_tc2 00:21:23.438 ************************************ 00:21:23.438 17:25:32 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2 00:21:23.438 17:25:32 -- host/target_disconnect.sh@45 -- # disconnect_init 192.168.100.8 00:21:23.438 17:25:32 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:21:23.438 17:25:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:23.438 17:25:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:23.438 17:25:32 -- common/autotest_common.sh@10 -- # set +x 00:21:23.438 17:25:32 -- nvmf/common.sh@470 -- # nvmfpid=3060581 00:21:23.438 17:25:32 -- nvmf/common.sh@471 -- # waitforlisten 3060581 00:21:23.438 17:25:32 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:21:23.439 17:25:32 -- common/autotest_common.sh@817 -- # '[' -z 3060581 ']' 00:21:23.439 17:25:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.439 17:25:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:23.439 17:25:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:23.439 17:25:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:23.439 17:25:32 -- common/autotest_common.sh@10 -- # set +x 00:21:23.439 [2024-04-24 17:25:32.662393] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:21:23.439 [2024-04-24 17:25:32.662434] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.439 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.696 [2024-04-24 17:25:32.731011] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:23.696 [2024-04-24 17:25:32.805298] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.696 [2024-04-24 17:25:32.805334] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.696 [2024-04-24 17:25:32.805340] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.696 [2024-04-24 17:25:32.805346] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.696 [2024-04-24 17:25:32.805350] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:23.696 [2024-04-24 17:25:32.805465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:23.696 [2024-04-24 17:25:32.805577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:23.696 [2024-04-24 17:25:32.805683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:23.696 [2024-04-24 17:25:32.805685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:21:24.259 17:25:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:24.259 17:25:33 -- common/autotest_common.sh@850 -- # return 0 00:21:24.259 17:25:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:24.259 17:25:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:24.259 17:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:24.259 17:25:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:24.259 17:25:33 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:24.259 17:25:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:24.259 17:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:24.517 Malloc0 00:21:24.517 17:25:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.517 17:25:33 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:21:24.517 17:25:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:24.517 17:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:24.517 [2024-04-24 17:25:33.538699] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf1d000/0xf28c40) succeed. 00:21:24.517 [2024-04-24 17:25:33.549088] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf1e5f0/0xfc8cd0) succeed. 
00:21:24.517 17:25:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.517 17:25:33 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:24.517 17:25:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:24.517 17:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:24.517 17:25:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.517 17:25:33 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:24.517 17:25:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:24.517 17:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:24.517 17:25:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.517 17:25:33 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:24.517 17:25:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:24.517 17:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:24.517 [2024-04-24 17:25:33.693717] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:24.517 17:25:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.517 17:25:33 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:21:24.517 17:25:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:24.517 17:25:33 -- common/autotest_common.sh@10 -- # set +x 00:21:24.517 17:25:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.517 17:25:33 -- host/target_disconnect.sh@50 -- # reconnectpid=3060623 00:21:24.517 17:25:33 -- host/target_disconnect.sh@52 -- # sleep 2 00:21:24.517 17:25:33 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:21:24.517 EAL: No free 2048 kB hugepages reported on node 1 00:21:27.036 17:25:35 -- host/target_disconnect.sh@53 -- # kill -9 3060581 00:21:27.036 17:25:35 -- host/target_disconnect.sh@55 -- # sleep 2 00:21:27.969 Read completed with error (sct=0, sc=8) 00:21:27.969 starting I/O failed 00:21:27.969 Write completed with error (sct=0, sc=8) 00:21:27.969 starting I/O failed 00:21:27.969 Read completed with error (sct=0, sc=8) 00:21:27.969 starting I/O failed 00:21:27.969 Read completed with error (sct=0, sc=8) 00:21:27.969 starting I/O failed 00:21:27.969 Write completed with error (sct=0, sc=8) 00:21:27.969 starting I/O failed 00:21:27.969 Write completed with error (sct=0, sc=8) 00:21:27.969 starting I/O failed 00:21:27.969 Read completed with error (sct=0, sc=8) 00:21:27.969 starting I/O failed 00:21:27.969 Read completed with error (sct=0, sc=8) 00:21:27.969 starting I/O failed 00:21:27.969 Write completed with error (sct=0, sc=8) 00:21:27.969 starting I/O failed 00:21:27.969 Read completed with error (sct=0, sc=8) 00:21:27.969 starting I/O failed 00:21:27.969 Read completed with error (sct=0, sc=8) 00:21:27.969 starting I/O failed 00:21:27.969 Write completed with error (sct=0, sc=8) 00:21:27.969 starting I/O failed 00:21:27.969 Read completed with error (sct=0, sc=8) 00:21:27.969 starting I/O failed 00:21:27.969 Write completed with error (sct=0, sc=8) 00:21:27.970 starting I/O failed 00:21:27.970 Write completed with error (sct=0, sc=8) 00:21:27.970 starting I/O failed 00:21:27.970 Write completed with 
error (sct=0, sc=8) 00:21:27.970 starting I/O failed 00:21:27.970 Write completed with error (sct=0, sc=8) 00:21:27.970 starting I/O failed 00:21:27.970 Write completed with error (sct=0, sc=8) 00:21:27.970 starting I/O failed 00:21:27.970 Write completed with error (sct=0, sc=8) 00:21:27.970 starting I/O failed 00:21:27.970 Write completed with error (sct=0, sc=8) 00:21:27.970 starting I/O failed 00:21:27.970 Read completed with error (sct=0, sc=8) 00:21:27.970 starting I/O failed 00:21:27.970 Read completed with error (sct=0, sc=8) 00:21:27.970 starting I/O failed 00:21:27.970 Write completed with error (sct=0, sc=8) 00:21:27.970 starting I/O failed 00:21:27.970 Write completed with error (sct=0, sc=8) 00:21:27.970 starting I/O failed 00:21:27.970 Write completed with error (sct=0, sc=8) 00:21:27.970 starting I/O failed 00:21:27.970 Write completed with error (sct=0, sc=8) 00:21:27.970 starting I/O failed 00:21:27.970 Read completed with error (sct=0, sc=8) 00:21:27.970 starting I/O failed 00:21:27.970 Write completed with error (sct=0, sc=8) 00:21:27.970 starting I/O failed 00:21:27.970 Read completed with error (sct=0, sc=8) 00:21:27.970 starting I/O failed 00:21:27.970 Write completed with error (sct=0, sc=8) 00:21:27.970 starting I/O failed 00:21:27.970 Read completed with error (sct=0, sc=8) 00:21:27.970 starting I/O failed 00:21:27.970 Read completed with error (sct=0, sc=8) 00:21:27.970 starting I/O failed 00:21:27.970 [2024-04-24 17:25:36.870058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:28.536 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 3060581 Killed "${NVMF_APP[@]}" "$@" 00:21:28.536 17:25:37 -- host/target_disconnect.sh@56 -- # disconnect_init 192.168.100.8 00:21:28.536 17:25:37 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:21:28.536 17:25:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:28.536 17:25:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:28.536 17:25:37 -- common/autotest_common.sh@10 -- # set +x 00:21:28.536 17:25:37 -- nvmf/common.sh@470 -- # nvmfpid=3060676 00:21:28.536 17:25:37 -- nvmf/common.sh@471 -- # waitforlisten 3060676 00:21:28.536 17:25:37 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:21:28.536 17:25:37 -- common/autotest_common.sh@817 -- # '[' -z 3060676 ']' 00:21:28.536 17:25:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.536 17:25:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:28.536 17:25:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.536 17:25:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:28.536 17:25:37 -- common/autotest_common.sh@10 -- # set +x 00:21:28.536 [2024-04-24 17:25:37.766980] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:21:28.536 [2024-04-24 17:25:37.767034] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.793 EAL: No free 2048 kB hugepages reported on node 1 00:21:28.793 [2024-04-24 17:25:37.837032] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:28.793 Write completed with error (sct=0, sc=8) 00:21:28.793 starting I/O failed 00:21:28.793 Read completed with error (sct=0, sc=8) 00:21:28.793 starting I/O failed 00:21:28.793 Write completed with error (sct=0, sc=8) 00:21:28.793 starting I/O failed 00:21:28.793 Read completed with error (sct=0, sc=8) 00:21:28.793 starting I/O failed 00:21:28.793 Write completed with error (sct=0, sc=8) 00:21:28.793 starting I/O failed 00:21:28.793 Read completed with error (sct=0, sc=8) 00:21:28.793 starting I/O failed 00:21:28.793 Write completed with error (sct=0, sc=8) 00:21:28.793 starting I/O failed 00:21:28.793 Write completed with error (sct=0, sc=8) 00:21:28.793 starting I/O failed 00:21:28.793 Write completed with error (sct=0, sc=8) 00:21:28.793 starting I/O failed 00:21:28.793 Read completed with error (sct=0, sc=8) 00:21:28.793 starting I/O failed 00:21:28.793 Write completed with error (sct=0, sc=8) 00:21:28.793 starting I/O failed 00:21:28.793 Read completed with error (sct=0, sc=8) 00:21:28.793 starting I/O failed 00:21:28.793 Read completed with error (sct=0, sc=8) 00:21:28.793 starting I/O failed 00:21:28.793 Write completed with error (sct=0, sc=8) 00:21:28.793 starting I/O failed 00:21:28.793 Write completed with error (sct=0, sc=8) 00:21:28.793 starting I/O failed 00:21:28.793 Read completed with error (sct=0, sc=8) 00:21:28.793 starting I/O failed 00:21:28.793 Write completed with error (sct=0, sc=8) 00:21:28.793 starting I/O failed 00:21:28.793 Read completed with error (sct=0, sc=8) 00:21:28.793 starting I/O failed 00:21:28.793 Read completed with error (sct=0, sc=8) 00:21:28.793 starting I/O failed 00:21:28.793 Write completed with error (sct=0, sc=8) 00:21:28.793 starting I/O failed 00:21:28.793 Read completed with error (sct=0, sc=8) 00:21:28.793 starting I/O failed 00:21:28.793 Write completed with error (sct=0, sc=8) 00:21:28.793 starting I/O failed 00:21:28.793 Write completed with error (sct=0, sc=8) 00:21:28.793 starting I/O failed 00:21:28.793 Read completed with error (sct=0, sc=8) 00:21:28.793 starting I/O failed 00:21:28.793 Write completed with error (sct=0, sc=8) 00:21:28.793 starting I/O failed 00:21:28.793 Read completed with error (sct=0, sc=8) 00:21:28.793 starting I/O failed 00:21:28.793 Read completed with error (sct=0, sc=8) 00:21:28.793 starting I/O failed 00:21:28.793 Write completed with error (sct=0, sc=8) 00:21:28.793 starting I/O failed 00:21:28.793 Write completed with error (sct=0, sc=8) 00:21:28.793 starting I/O failed 00:21:28.793 Read completed with error (sct=0, sc=8) 00:21:28.793 starting I/O failed 00:21:28.793 Write completed with error (sct=0, sc=8) 00:21:28.793 starting I/O failed 00:21:28.793 Write completed with error (sct=0, sc=8) 00:21:28.793 starting I/O failed 00:21:28.793 [2024-04-24 17:25:37.875376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:28.794 [2024-04-24 17:25:37.912662] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:28.794 [2024-04-24 17:25:37.912699] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:28.794 [2024-04-24 17:25:37.912706] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:28.794 [2024-04-24 17:25:37.912712] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:28.794 [2024-04-24 17:25:37.912717] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:28.794 [2024-04-24 17:25:37.912869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:28.794 [2024-04-24 17:25:37.912934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:28.794 [2024-04-24 17:25:37.913038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:28.794 [2024-04-24 17:25:37.913040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:21:29.356 17:25:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:29.356 17:25:38 -- common/autotest_common.sh@850 -- # return 0 00:21:29.356 17:25:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:29.356 17:25:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:29.356 17:25:38 -- common/autotest_common.sh@10 -- # set +x 00:21:29.613 17:25:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:29.613 17:25:38 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:29.613 17:25:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.613 17:25:38 -- common/autotest_common.sh@10 -- # set +x 00:21:29.613 Malloc0 00:21:29.613 17:25:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.613 17:25:38 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:21:29.613 17:25:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.613 17:25:38 -- common/autotest_common.sh@10 -- # set +x 00:21:29.613 [2024-04-24 17:25:38.660857] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a8e000/0x1a99c40) succeed. 00:21:29.613 [2024-04-24 17:25:38.671552] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a8f5f0/0x1b39cd0) succeed. 
00:21:29.613 17:25:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.613 17:25:38 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:29.613 17:25:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.613 17:25:38 -- common/autotest_common.sh@10 -- # set +x 00:21:29.613 17:25:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.613 17:25:38 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:29.613 17:25:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.613 17:25:38 -- common/autotest_common.sh@10 -- # set +x 00:21:29.613 17:25:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.613 17:25:38 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:29.613 17:25:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.613 17:25:38 -- common/autotest_common.sh@10 -- # set +x 00:21:29.613 [2024-04-24 17:25:38.814648] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:29.613 17:25:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.613 17:25:38 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:21:29.613 17:25:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.613 17:25:38 -- common/autotest_common.sh@10 -- # set +x 00:21:29.613 17:25:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.613 17:25:38 -- host/target_disconnect.sh@58 -- # wait 3060623 00:21:29.904 Read completed with error (sct=0, sc=8) 00:21:29.904 starting I/O failed 00:21:29.904 Write completed with error (sct=0, sc=8) 00:21:29.904 starting I/O failed 00:21:29.904 Write completed with error (sct=0, sc=8) 00:21:29.904 starting I/O failed 00:21:29.904 Read completed with error (sct=0, sc=8) 00:21:29.904 starting I/O failed 00:21:29.904 Write completed with error (sct=0, sc=8) 00:21:29.904 starting I/O failed 00:21:29.904 Write completed with error (sct=0, sc=8) 00:21:29.904 starting I/O failed 00:21:29.904 Write completed with error (sct=0, sc=8) 00:21:29.904 starting I/O failed 00:21:29.904 Read completed with error (sct=0, sc=8) 00:21:29.904 starting I/O failed 00:21:29.904 Read completed with error (sct=0, sc=8) 00:21:29.904 starting I/O failed 00:21:29.904 Write completed with error (sct=0, sc=8) 00:21:29.904 starting I/O failed 00:21:29.904 Write completed with error (sct=0, sc=8) 00:21:29.904 starting I/O failed 00:21:29.904 Read completed with error (sct=0, sc=8) 00:21:29.904 starting I/O failed 00:21:29.904 Read completed with error (sct=0, sc=8) 00:21:29.904 starting I/O failed 00:21:29.904 Read completed with error (sct=0, sc=8) 00:21:29.904 starting I/O failed 00:21:29.904 Read completed with error (sct=0, sc=8) 00:21:29.904 starting I/O failed 00:21:29.904 Write completed with error (sct=0, sc=8) 00:21:29.904 starting I/O failed 00:21:29.904 Write completed with error (sct=0, sc=8) 00:21:29.904 starting I/O failed 00:21:29.904 Write completed with error (sct=0, sc=8) 00:21:29.904 starting I/O failed 00:21:29.904 Write completed with error (sct=0, sc=8) 00:21:29.904 starting I/O failed 00:21:29.904 Write completed with error (sct=0, sc=8) 00:21:29.904 starting I/O failed 00:21:29.904 Read completed with error (sct=0, sc=8) 00:21:29.904 starting I/O failed 00:21:29.904 Write completed with 
error (sct=0, sc=8) 00:21:29.904 starting I/O failed 00:21:29.904 Write completed with error (sct=0, sc=8) 00:21:29.904 starting I/O failed 00:21:29.904 Write completed with error (sct=0, sc=8) 00:21:29.904 starting I/O failed 00:21:29.904 Read completed with error (sct=0, sc=8) 00:21:29.904 starting I/O failed 00:21:29.904 Read completed with error (sct=0, sc=8) 00:21:29.904 starting I/O failed 00:21:29.904 Write completed with error (sct=0, sc=8) 00:21:29.904 starting I/O failed 00:21:29.904 Read completed with error (sct=0, sc=8) 00:21:29.904 starting I/O failed 00:21:29.904 Write completed with error (sct=0, sc=8) 00:21:29.904 starting I/O failed 00:21:29.904 Write completed with error (sct=0, sc=8) 00:21:29.904 starting I/O failed 00:21:29.904 Write completed with error (sct=0, sc=8) 00:21:29.904 starting I/O failed 00:21:29.904 Read completed with error (sct=0, sc=8) 00:21:29.904 starting I/O failed 00:21:29.904 [2024-04-24 17:25:38.880639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:29.904 [2024-04-24 17:25:38.888663] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:29.904 [2024-04-24 17:25:38.888718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:29.904 [2024-04-24 17:25:38.888739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:29.904 [2024-04-24 17:25:38.888747] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:29.904 [2024-04-24 17:25:38.888753] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:29.904 [2024-04-24 17:25:38.898920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:29.904 qpair failed and we were unable to recover it. 00:21:29.904 [2024-04-24 17:25:38.908691] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:29.904 [2024-04-24 17:25:38.908728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:29.904 [2024-04-24 17:25:38.908744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:29.904 [2024-04-24 17:25:38.908751] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:29.904 [2024-04-24 17:25:38.908757] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:29.904 [2024-04-24 17:25:38.918973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:29.904 qpair failed and we were unable to recover it. 
00:21:29.904 [2024-04-24 17:25:38.928768] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:29.904 [2024-04-24 17:25:38.928805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:29.904 [2024-04-24 17:25:38.928821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:29.904 [2024-04-24 17:25:38.928832] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:29.904 [2024-04-24 17:25:38.928839] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:29.904 [2024-04-24 17:25:38.939044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:29.904 qpair failed and we were unable to recover it. 00:21:29.904 [2024-04-24 17:25:38.948861] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:29.904 [2024-04-24 17:25:38.948905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:29.904 [2024-04-24 17:25:38.948919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:29.904 [2024-04-24 17:25:38.948926] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:29.904 [2024-04-24 17:25:38.948933] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:29.904 [2024-04-24 17:25:38.959166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:29.904 qpair failed and we were unable to recover it. 00:21:29.904 [2024-04-24 17:25:38.968839] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:29.904 [2024-04-24 17:25:38.968881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:29.904 [2024-04-24 17:25:38.968898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:29.904 [2024-04-24 17:25:38.968909] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:29.904 [2024-04-24 17:25:38.968915] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:29.904 [2024-04-24 17:25:38.979038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:29.904 qpair failed and we were unable to recover it. 
00:21:29.904 [2024-04-24 17:25:38.988816] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:29.904 [2024-04-24 17:25:38.988857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:29.904 [2024-04-24 17:25:38.988872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:29.904 [2024-04-24 17:25:38.988879] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:29.904 [2024-04-24 17:25:38.988885] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:29.904 [2024-04-24 17:25:38.999183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:29.904 qpair failed and we were unable to recover it. 00:21:29.904 [2024-04-24 17:25:39.008971] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:29.904 [2024-04-24 17:25:39.009016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:29.904 [2024-04-24 17:25:39.009031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:29.904 [2024-04-24 17:25:39.009038] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:29.904 [2024-04-24 17:25:39.009044] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:29.904 [2024-04-24 17:25:39.019344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:29.904 qpair failed and we were unable to recover it. 00:21:29.904 [2024-04-24 17:25:39.029039] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:29.904 [2024-04-24 17:25:39.029079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:29.904 [2024-04-24 17:25:39.029094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:29.904 [2024-04-24 17:25:39.029101] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:29.904 [2024-04-24 17:25:39.029107] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:29.905 [2024-04-24 17:25:39.039166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:29.905 qpair failed and we were unable to recover it. 
00:21:29.905 [2024-04-24 17:25:39.049114] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:29.905 [2024-04-24 17:25:39.049163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:29.905 [2024-04-24 17:25:39.049178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:29.905 [2024-04-24 17:25:39.049184] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:29.905 [2024-04-24 17:25:39.049191] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:29.905 [2024-04-24 17:25:39.059511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:29.905 qpair failed and we were unable to recover it. 00:21:29.905 [2024-04-24 17:25:39.069049] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:29.905 [2024-04-24 17:25:39.069089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:29.905 [2024-04-24 17:25:39.069105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:29.905 [2024-04-24 17:25:39.069112] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:29.905 [2024-04-24 17:25:39.069118] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:29.905 [2024-04-24 17:25:39.079406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:29.905 qpair failed and we were unable to recover it. 00:21:29.905 [2024-04-24 17:25:39.089219] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:29.905 [2024-04-24 17:25:39.089265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:29.905 [2024-04-24 17:25:39.089279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:29.905 [2024-04-24 17:25:39.089286] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:29.905 [2024-04-24 17:25:39.089292] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:29.905 [2024-04-24 17:25:39.099480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:29.905 qpair failed and we were unable to recover it. 
00:21:29.905 [2024-04-24 17:25:39.109230] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:29.905 [2024-04-24 17:25:39.109270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:29.905 [2024-04-24 17:25:39.109283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:29.905 [2024-04-24 17:25:39.109290] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:29.905 [2024-04-24 17:25:39.109296] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:29.905 [2024-04-24 17:25:39.119421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:29.905 qpair failed and we were unable to recover it. 00:21:30.182 [2024-04-24 17:25:39.129297] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.182 [2024-04-24 17:25:39.129341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.182 [2024-04-24 17:25:39.129356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.182 [2024-04-24 17:25:39.129363] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.182 [2024-04-24 17:25:39.129369] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.182 [2024-04-24 17:25:39.139650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.182 qpair failed and we were unable to recover it. 00:21:30.182 [2024-04-24 17:25:39.149343] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.182 [2024-04-24 17:25:39.149379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.182 [2024-04-24 17:25:39.149396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.182 [2024-04-24 17:25:39.149403] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.182 [2024-04-24 17:25:39.149409] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.182 [2024-04-24 17:25:39.159636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.182 qpair failed and we were unable to recover it. 
00:21:30.182 [2024-04-24 17:25:39.169436] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.182 [2024-04-24 17:25:39.169478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.182 [2024-04-24 17:25:39.169494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.182 [2024-04-24 17:25:39.169501] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.182 [2024-04-24 17:25:39.169507] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.182 [2024-04-24 17:25:39.179743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.182 qpair failed and we were unable to recover it. 00:21:30.182 [2024-04-24 17:25:39.189336] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.182 [2024-04-24 17:25:39.189378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.182 [2024-04-24 17:25:39.189394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.182 [2024-04-24 17:25:39.189401] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.182 [2024-04-24 17:25:39.189407] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.182 [2024-04-24 17:25:39.199700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.182 qpair failed and we were unable to recover it. 00:21:30.182 [2024-04-24 17:25:39.209497] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.182 [2024-04-24 17:25:39.209541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.182 [2024-04-24 17:25:39.209555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.182 [2024-04-24 17:25:39.209562] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.182 [2024-04-24 17:25:39.209568] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.182 [2024-04-24 17:25:39.219800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.182 qpair failed and we were unable to recover it. 
00:21:30.182 [2024-04-24 17:25:39.229652] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.182 [2024-04-24 17:25:39.229691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.182 [2024-04-24 17:25:39.229706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.182 [2024-04-24 17:25:39.229713] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.182 [2024-04-24 17:25:39.229719] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.182 [2024-04-24 17:25:39.239887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.182 qpair failed and we were unable to recover it. 00:21:30.182 [2024-04-24 17:25:39.249692] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.182 [2024-04-24 17:25:39.249734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.182 [2024-04-24 17:25:39.249749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.182 [2024-04-24 17:25:39.249755] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.182 [2024-04-24 17:25:39.249761] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.182 [2024-04-24 17:25:39.259951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.182 qpair failed and we were unable to recover it. 00:21:30.182 [2024-04-24 17:25:39.269646] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.182 [2024-04-24 17:25:39.269683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.182 [2024-04-24 17:25:39.269697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.182 [2024-04-24 17:25:39.269704] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.182 [2024-04-24 17:25:39.269710] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.183 [2024-04-24 17:25:39.279850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.183 qpair failed and we were unable to recover it. 
00:21:30.183 [2024-04-24 17:25:39.289651] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.183 [2024-04-24 17:25:39.289688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.183 [2024-04-24 17:25:39.289702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.183 [2024-04-24 17:25:39.289709] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.183 [2024-04-24 17:25:39.289715] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.183 [2024-04-24 17:25:39.300075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.183 qpair failed and we were unable to recover it. 00:21:30.183 [2024-04-24 17:25:39.309761] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.183 [2024-04-24 17:25:39.309802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.183 [2024-04-24 17:25:39.309817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.183 [2024-04-24 17:25:39.309824] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.183 [2024-04-24 17:25:39.309836] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.183 [2024-04-24 17:25:39.319986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.183 qpair failed and we were unable to recover it. 00:21:30.183 [2024-04-24 17:25:39.329778] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.183 [2024-04-24 17:25:39.329816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.183 [2024-04-24 17:25:39.329835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.183 [2024-04-24 17:25:39.329843] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.183 [2024-04-24 17:25:39.329849] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.183 [2024-04-24 17:25:39.340300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.183 qpair failed and we were unable to recover it. 
00:21:30.183 [2024-04-24 17:25:39.349853] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.183 [2024-04-24 17:25:39.349892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.183 [2024-04-24 17:25:39.349906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.183 [2024-04-24 17:25:39.349913] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.183 [2024-04-24 17:25:39.349919] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.183 [2024-04-24 17:25:39.360297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.183 qpair failed and we were unable to recover it. 00:21:30.183 [2024-04-24 17:25:39.369931] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.183 [2024-04-24 17:25:39.369970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.183 [2024-04-24 17:25:39.369985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.183 [2024-04-24 17:25:39.369992] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.183 [2024-04-24 17:25:39.369998] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.183 [2024-04-24 17:25:39.380281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.183 qpair failed and we were unable to recover it. 00:21:30.183 [2024-04-24 17:25:39.389972] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.183 [2024-04-24 17:25:39.390008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.183 [2024-04-24 17:25:39.390022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.183 [2024-04-24 17:25:39.390029] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.183 [2024-04-24 17:25:39.390035] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.183 [2024-04-24 17:25:39.400330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.183 qpair failed and we were unable to recover it. 
00:21:30.183 [2024-04-24 17:25:39.410073] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.183 [2024-04-24 17:25:39.410110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.183 [2024-04-24 17:25:39.410124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.183 [2024-04-24 17:25:39.410133] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.183 [2024-04-24 17:25:39.410139] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.183 [2024-04-24 17:25:39.420288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.183 qpair failed and we were unable to recover it. 00:21:30.466 [2024-04-24 17:25:39.430169] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.466 [2024-04-24 17:25:39.430214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.466 [2024-04-24 17:25:39.430230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.466 [2024-04-24 17:25:39.430237] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.466 [2024-04-24 17:25:39.430244] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.466 [2024-04-24 17:25:39.440494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.466 qpair failed and we were unable to recover it. 00:21:30.466 [2024-04-24 17:25:39.450286] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.466 [2024-04-24 17:25:39.450331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.466 [2024-04-24 17:25:39.450346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.466 [2024-04-24 17:25:39.450353] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.466 [2024-04-24 17:25:39.450359] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.466 [2024-04-24 17:25:39.460499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.466 qpair failed and we were unable to recover it. 
00:21:30.466 [2024-04-24 17:25:39.470194] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.466 [2024-04-24 17:25:39.470235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.466 [2024-04-24 17:25:39.470249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.466 [2024-04-24 17:25:39.470256] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.466 [2024-04-24 17:25:39.470262] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.466 [2024-04-24 17:25:39.480598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.466 qpair failed and we were unable to recover it. 00:21:30.466 [2024-04-24 17:25:39.490284] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.466 [2024-04-24 17:25:39.490324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.466 [2024-04-24 17:25:39.490339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.466 [2024-04-24 17:25:39.490346] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.466 [2024-04-24 17:25:39.490353] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.466 [2024-04-24 17:25:39.500692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.466 qpair failed and we were unable to recover it. 00:21:30.466 [2024-04-24 17:25:39.510398] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.466 [2024-04-24 17:25:39.510440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.466 [2024-04-24 17:25:39.510455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.466 [2024-04-24 17:25:39.510462] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.466 [2024-04-24 17:25:39.510469] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.466 [2024-04-24 17:25:39.520670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.466 qpair failed and we were unable to recover it. 
00:21:30.466 [2024-04-24 17:25:39.530447] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.466 [2024-04-24 17:25:39.530484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.466 [2024-04-24 17:25:39.530499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.466 [2024-04-24 17:25:39.530505] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.466 [2024-04-24 17:25:39.530512] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.466 [2024-04-24 17:25:39.540852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.466 qpair failed and we were unable to recover it. 00:21:30.466 [2024-04-24 17:25:39.550431] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.466 [2024-04-24 17:25:39.550468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.466 [2024-04-24 17:25:39.550483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.466 [2024-04-24 17:25:39.550490] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.466 [2024-04-24 17:25:39.550496] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.466 [2024-04-24 17:25:39.560836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.466 qpair failed and we were unable to recover it. 00:21:30.466 [2024-04-24 17:25:39.570534] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.467 [2024-04-24 17:25:39.570574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.467 [2024-04-24 17:25:39.570588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.467 [2024-04-24 17:25:39.570595] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.467 [2024-04-24 17:25:39.570601] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.467 [2024-04-24 17:25:39.580838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.467 qpair failed and we were unable to recover it. 
00:21:30.467 [2024-04-24 17:25:39.590549] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.467 [2024-04-24 17:25:39.590588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.467 [2024-04-24 17:25:39.590606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.467 [2024-04-24 17:25:39.590613] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.467 [2024-04-24 17:25:39.590619] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.467 [2024-04-24 17:25:39.600952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.467 qpair failed and we were unable to recover it. 00:21:30.467 [2024-04-24 17:25:39.610670] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.467 [2024-04-24 17:25:39.610711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.467 [2024-04-24 17:25:39.610725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.467 [2024-04-24 17:25:39.610732] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.467 [2024-04-24 17:25:39.610738] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.467 [2024-04-24 17:25:39.620961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.467 qpair failed and we were unable to recover it. 00:21:30.467 [2024-04-24 17:25:39.630694] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.467 [2024-04-24 17:25:39.630730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.467 [2024-04-24 17:25:39.630744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.467 [2024-04-24 17:25:39.630751] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.467 [2024-04-24 17:25:39.630757] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.467 [2024-04-24 17:25:39.640967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.467 qpair failed and we were unable to recover it. 
00:21:30.467 [2024-04-24 17:25:39.650815] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.467 [2024-04-24 17:25:39.650859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.467 [2024-04-24 17:25:39.650874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.467 [2024-04-24 17:25:39.650881] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.467 [2024-04-24 17:25:39.650887] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.467 [2024-04-24 17:25:39.661168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.467 qpair failed and we were unable to recover it. 00:21:30.467 [2024-04-24 17:25:39.670878] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.467 [2024-04-24 17:25:39.670918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.467 [2024-04-24 17:25:39.670932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.467 [2024-04-24 17:25:39.670939] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.467 [2024-04-24 17:25:39.670945] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.467 [2024-04-24 17:25:39.681342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.467 qpair failed and we were unable to recover it. 00:21:30.467 [2024-04-24 17:25:39.690841] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.467 [2024-04-24 17:25:39.690884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.467 [2024-04-24 17:25:39.690899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.467 [2024-04-24 17:25:39.690906] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.467 [2024-04-24 17:25:39.690913] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.467 [2024-04-24 17:25:39.701374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.467 qpair failed and we were unable to recover it. 
00:21:30.467 [2024-04-24 17:25:39.710976] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.467 [2024-04-24 17:25:39.711022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.467 [2024-04-24 17:25:39.711037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.467 [2024-04-24 17:25:39.711044] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.467 [2024-04-24 17:25:39.711050] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.724 [2024-04-24 17:25:39.721317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.724 qpair failed and we were unable to recover it. 00:21:30.724 [2024-04-24 17:25:39.731008] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.724 [2024-04-24 17:25:39.731054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.724 [2024-04-24 17:25:39.731068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.724 [2024-04-24 17:25:39.731075] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.724 [2024-04-24 17:25:39.731081] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.724 [2024-04-24 17:25:39.741729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.724 qpair failed and we were unable to recover it. 00:21:30.724 [2024-04-24 17:25:39.751221] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.724 [2024-04-24 17:25:39.751262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.724 [2024-04-24 17:25:39.751276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.724 [2024-04-24 17:25:39.751283] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.724 [2024-04-24 17:25:39.751289] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.724 [2024-04-24 17:25:39.761631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.724 qpair failed and we were unable to recover it. 
00:21:30.724 [2024-04-24 17:25:39.771258] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.724 [2024-04-24 17:25:39.771306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.724 [2024-04-24 17:25:39.771321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.724 [2024-04-24 17:25:39.771328] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.724 [2024-04-24 17:25:39.771334] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.724 [2024-04-24 17:25:39.781804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.724 qpair failed and we were unable to recover it. 00:21:30.724 [2024-04-24 17:25:39.791356] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.724 [2024-04-24 17:25:39.791397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.724 [2024-04-24 17:25:39.791411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.724 [2024-04-24 17:25:39.791419] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.724 [2024-04-24 17:25:39.791425] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.724 [2024-04-24 17:25:39.801873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.724 qpair failed and we were unable to recover it. 00:21:30.724 [2024-04-24 17:25:39.811416] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.724 [2024-04-24 17:25:39.811453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.724 [2024-04-24 17:25:39.811467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.724 [2024-04-24 17:25:39.811474] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.724 [2024-04-24 17:25:39.811480] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.724 [2024-04-24 17:25:39.821980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.724 qpair failed and we were unable to recover it. 
00:21:30.724 [2024-04-24 17:25:39.831463] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.724 [2024-04-24 17:25:39.831502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.724 [2024-04-24 17:25:39.831516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.724 [2024-04-24 17:25:39.831523] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.724 [2024-04-24 17:25:39.831529] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.724 [2024-04-24 17:25:39.841843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.724 qpair failed and we were unable to recover it. 00:21:30.724 [2024-04-24 17:25:39.851502] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.724 [2024-04-24 17:25:39.851546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.724 [2024-04-24 17:25:39.851560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.724 [2024-04-24 17:25:39.851570] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.724 [2024-04-24 17:25:39.851576] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.724 [2024-04-24 17:25:39.862059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.724 qpair failed and we were unable to recover it. 00:21:30.724 [2024-04-24 17:25:39.871539] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.724 [2024-04-24 17:25:39.871579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.724 [2024-04-24 17:25:39.871593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.724 [2024-04-24 17:25:39.871600] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.724 [2024-04-24 17:25:39.871606] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.724 [2024-04-24 17:25:39.882036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.724 qpair failed and we were unable to recover it. 
00:21:30.724 [2024-04-24 17:25:39.891541] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.724 [2024-04-24 17:25:39.891576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.724 [2024-04-24 17:25:39.891590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.724 [2024-04-24 17:25:39.891597] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.724 [2024-04-24 17:25:39.891603] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.724 [2024-04-24 17:25:39.902197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.724 qpair failed and we were unable to recover it. 00:21:30.724 [2024-04-24 17:25:39.911545] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.725 [2024-04-24 17:25:39.911585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.725 [2024-04-24 17:25:39.911599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.725 [2024-04-24 17:25:39.911606] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.725 [2024-04-24 17:25:39.911612] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.725 [2024-04-24 17:25:39.922259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.725 qpair failed and we were unable to recover it. 00:21:30.725 [2024-04-24 17:25:39.931735] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.725 [2024-04-24 17:25:39.931773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.725 [2024-04-24 17:25:39.931789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.725 [2024-04-24 17:25:39.931795] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.725 [2024-04-24 17:25:39.931802] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.725 [2024-04-24 17:25:39.942261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.725 qpair failed and we were unable to recover it. 
00:21:30.725 [2024-04-24 17:25:39.951682] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.725 [2024-04-24 17:25:39.951717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.725 [2024-04-24 17:25:39.951734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.725 [2024-04-24 17:25:39.951741] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.725 [2024-04-24 17:25:39.951747] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.725 [2024-04-24 17:25:39.962223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.725 qpair failed and we were unable to recover it. 00:21:30.982 [2024-04-24 17:25:39.971906] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.982 [2024-04-24 17:25:39.971950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.982 [2024-04-24 17:25:39.971966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.982 [2024-04-24 17:25:39.971974] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.982 [2024-04-24 17:25:39.971980] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.982 [2024-04-24 17:25:39.982385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.982 qpair failed and we were unable to recover it. 00:21:30.982 [2024-04-24 17:25:39.991940] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.982 [2024-04-24 17:25:39.991984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.982 [2024-04-24 17:25:39.991999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.982 [2024-04-24 17:25:39.992006] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.982 [2024-04-24 17:25:39.992012] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.982 [2024-04-24 17:25:40.002467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.982 qpair failed and we were unable to recover it. 
00:21:30.982 [2024-04-24 17:25:40.011934] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.982 [2024-04-24 17:25:40.011979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.982 [2024-04-24 17:25:40.011993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.982 [2024-04-24 17:25:40.012000] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.982 [2024-04-24 17:25:40.012007] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.982 [2024-04-24 17:25:40.022575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.982 qpair failed and we were unable to recover it. 00:21:30.982 [2024-04-24 17:25:40.031999] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.982 [2024-04-24 17:25:40.032038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.982 [2024-04-24 17:25:40.032059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.982 [2024-04-24 17:25:40.032066] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.982 [2024-04-24 17:25:40.032073] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.982 [2024-04-24 17:25:40.042372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.982 qpair failed and we were unable to recover it. 00:21:30.982 [2024-04-24 17:25:40.052027] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.982 [2024-04-24 17:25:40.052070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.982 [2024-04-24 17:25:40.052084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.982 [2024-04-24 17:25:40.052092] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.982 [2024-04-24 17:25:40.052098] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.982 [2024-04-24 17:25:40.062426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.982 qpair failed and we were unable to recover it. 
00:21:30.982 [2024-04-24 17:25:40.072200] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.982 [2024-04-24 17:25:40.072241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.982 [2024-04-24 17:25:40.072259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.982 [2024-04-24 17:25:40.072266] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.982 [2024-04-24 17:25:40.072273] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.982 [2024-04-24 17:25:40.082620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.982 qpair failed and we were unable to recover it. 00:21:30.982 [2024-04-24 17:25:40.092193] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.982 [2024-04-24 17:25:40.092238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.982 [2024-04-24 17:25:40.092253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.982 [2024-04-24 17:25:40.092260] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.982 [2024-04-24 17:25:40.092266] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.982 [2024-04-24 17:25:40.102599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.982 qpair failed and we were unable to recover it. 00:21:30.982 [2024-04-24 17:25:40.112290] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.982 [2024-04-24 17:25:40.112326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.982 [2024-04-24 17:25:40.112340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.982 [2024-04-24 17:25:40.112348] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.982 [2024-04-24 17:25:40.112354] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.982 [2024-04-24 17:25:40.122742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.982 qpair failed and we were unable to recover it. 
00:21:30.982 [2024-04-24 17:25:40.132328] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.982 [2024-04-24 17:25:40.132367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.982 [2024-04-24 17:25:40.132382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.982 [2024-04-24 17:25:40.132389] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.982 [2024-04-24 17:25:40.132395] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.982 [2024-04-24 17:25:40.143053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.982 qpair failed and we were unable to recover it. 00:21:30.982 [2024-04-24 17:25:40.152469] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.982 [2024-04-24 17:25:40.152507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.982 [2024-04-24 17:25:40.152521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.982 [2024-04-24 17:25:40.152528] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.982 [2024-04-24 17:25:40.152535] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.982 [2024-04-24 17:25:40.162745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.982 qpair failed and we were unable to recover it. 00:21:30.982 [2024-04-24 17:25:40.172456] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.982 [2024-04-24 17:25:40.172495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.982 [2024-04-24 17:25:40.172509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.983 [2024-04-24 17:25:40.172516] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.983 [2024-04-24 17:25:40.172522] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.983 [2024-04-24 17:25:40.182805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.983 qpair failed and we were unable to recover it. 
00:21:30.983 [2024-04-24 17:25:40.192568] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.983 [2024-04-24 17:25:40.192606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.983 [2024-04-24 17:25:40.192621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.983 [2024-04-24 17:25:40.192628] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.983 [2024-04-24 17:25:40.192634] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.983 [2024-04-24 17:25:40.202951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.983 qpair failed and we were unable to recover it. 00:21:30.983 [2024-04-24 17:25:40.212593] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:30.983 [2024-04-24 17:25:40.212634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:30.983 [2024-04-24 17:25:40.212648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:30.983 [2024-04-24 17:25:40.212655] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:30.983 [2024-04-24 17:25:40.212661] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:30.983 [2024-04-24 17:25:40.223051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.983 qpair failed and we were unable to recover it. 00:21:31.240 [2024-04-24 17:25:40.232618] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.240 [2024-04-24 17:25:40.232667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.240 [2024-04-24 17:25:40.232681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.240 [2024-04-24 17:25:40.232688] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.240 [2024-04-24 17:25:40.232695] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.240 [2024-04-24 17:25:40.243118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.240 qpair failed and we were unable to recover it. 
00:21:31.240 [2024-04-24 17:25:40.252768] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.240 [2024-04-24 17:25:40.252808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.240 [2024-04-24 17:25:40.252822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.240 [2024-04-24 17:25:40.252833] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.240 [2024-04-24 17:25:40.252840] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.240 [2024-04-24 17:25:40.262958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.240 qpair failed and we were unable to recover it. 00:21:31.240 [2024-04-24 17:25:40.272688] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.240 [2024-04-24 17:25:40.272723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.240 [2024-04-24 17:25:40.272737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.240 [2024-04-24 17:25:40.272744] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.240 [2024-04-24 17:25:40.272750] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.240 [2024-04-24 17:25:40.283214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.240 qpair failed and we were unable to recover it. 00:21:31.240 [2024-04-24 17:25:40.292756] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.240 [2024-04-24 17:25:40.292792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.240 [2024-04-24 17:25:40.292806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.240 [2024-04-24 17:25:40.292817] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.240 [2024-04-24 17:25:40.292823] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.240 [2024-04-24 17:25:40.303213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.240 qpair failed and we were unable to recover it. 
00:21:31.240 [2024-04-24 17:25:40.312892] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.240 [2024-04-24 17:25:40.312934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.240 [2024-04-24 17:25:40.312948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.240 [2024-04-24 17:25:40.312955] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.240 [2024-04-24 17:25:40.312961] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.240 [2024-04-24 17:25:40.323519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.240 qpair failed and we were unable to recover it. 00:21:31.240 [2024-04-24 17:25:40.332887] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.240 [2024-04-24 17:25:40.332928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.240 [2024-04-24 17:25:40.332942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.240 [2024-04-24 17:25:40.332949] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.240 [2024-04-24 17:25:40.332955] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.240 [2024-04-24 17:25:40.343405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.240 qpair failed and we were unable to recover it. 00:21:31.240 [2024-04-24 17:25:40.353041] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.240 [2024-04-24 17:25:40.353081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.240 [2024-04-24 17:25:40.353095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.240 [2024-04-24 17:25:40.353102] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.240 [2024-04-24 17:25:40.353108] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.240 [2024-04-24 17:25:40.363304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.240 qpair failed and we were unable to recover it. 
00:21:31.240 [2024-04-24 17:25:40.373053] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.240 [2024-04-24 17:25:40.373095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.240 [2024-04-24 17:25:40.373109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.240 [2024-04-24 17:25:40.373116] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.240 [2024-04-24 17:25:40.373122] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.240 [2024-04-24 17:25:40.383390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.240 qpair failed and we were unable to recover it. 00:21:31.241 [2024-04-24 17:25:40.393021] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.241 [2024-04-24 17:25:40.393061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.241 [2024-04-24 17:25:40.393076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.241 [2024-04-24 17:25:40.393083] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.241 [2024-04-24 17:25:40.393090] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.241 [2024-04-24 17:25:40.403436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.241 qpair failed and we were unable to recover it. 00:21:31.241 [2024-04-24 17:25:40.413193] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.241 [2024-04-24 17:25:40.413232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.241 [2024-04-24 17:25:40.413247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.241 [2024-04-24 17:25:40.413255] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.241 [2024-04-24 17:25:40.413262] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.241 [2024-04-24 17:25:40.423519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.241 qpair failed and we were unable to recover it. 
00:21:31.241 [2024-04-24 17:25:40.433280] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.241 [2024-04-24 17:25:40.433316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.241 [2024-04-24 17:25:40.433331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.241 [2024-04-24 17:25:40.433338] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.241 [2024-04-24 17:25:40.433344] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.241 [2024-04-24 17:25:40.443784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.241 qpair failed and we were unable to recover it. 00:21:31.241 [2024-04-24 17:25:40.453335] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.241 [2024-04-24 17:25:40.453373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.241 [2024-04-24 17:25:40.453387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.241 [2024-04-24 17:25:40.453395] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.241 [2024-04-24 17:25:40.453401] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.241 [2024-04-24 17:25:40.463878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.241 qpair failed and we were unable to recover it. 00:21:31.241 [2024-04-24 17:25:40.473402] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.241 [2024-04-24 17:25:40.473440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.241 [2024-04-24 17:25:40.473457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.241 [2024-04-24 17:25:40.473464] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.241 [2024-04-24 17:25:40.473470] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.241 [2024-04-24 17:25:40.483749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.241 qpair failed and we were unable to recover it. 
00:21:31.497 [2024-04-24 17:25:40.493444] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.497 [2024-04-24 17:25:40.493489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.497 [2024-04-24 17:25:40.493503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.497 [2024-04-24 17:25:40.493510] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.497 [2024-04-24 17:25:40.493515] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.497 [2024-04-24 17:25:40.503979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.497 qpair failed and we were unable to recover it. 00:21:31.497 [2024-04-24 17:25:40.513522] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.497 [2024-04-24 17:25:40.513563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.497 [2024-04-24 17:25:40.513579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.497 [2024-04-24 17:25:40.513586] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.497 [2024-04-24 17:25:40.513592] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.497 [2024-04-24 17:25:40.523734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.497 qpair failed and we were unable to recover it. 00:21:31.497 [2024-04-24 17:25:40.533508] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.497 [2024-04-24 17:25:40.533543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.497 [2024-04-24 17:25:40.533558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.497 [2024-04-24 17:25:40.533565] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.497 [2024-04-24 17:25:40.533571] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.497 [2024-04-24 17:25:40.544168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.497 qpair failed and we were unable to recover it. 
00:21:31.497 [2024-04-24 17:25:40.553637] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.497 [2024-04-24 17:25:40.553678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.497 [2024-04-24 17:25:40.553692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.497 [2024-04-24 17:25:40.553699] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.497 [2024-04-24 17:25:40.553705] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.498 [2024-04-24 17:25:40.563950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.498 qpair failed and we were unable to recover it. 00:21:31.498 [2024-04-24 17:25:40.573564] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.498 [2024-04-24 17:25:40.573604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.498 [2024-04-24 17:25:40.573618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.498 [2024-04-24 17:25:40.573626] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.498 [2024-04-24 17:25:40.573632] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.498 [2024-04-24 17:25:40.583734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.498 qpair failed and we were unable to recover it. 00:21:31.498 [2024-04-24 17:25:40.593604] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.498 [2024-04-24 17:25:40.593640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.498 [2024-04-24 17:25:40.593655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.498 [2024-04-24 17:25:40.593662] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.498 [2024-04-24 17:25:40.593667] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.498 [2024-04-24 17:25:40.604353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.498 qpair failed and we were unable to recover it. 
00:21:31.498 [2024-04-24 17:25:40.613769] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.498 [2024-04-24 17:25:40.613807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.498 [2024-04-24 17:25:40.613822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.498 [2024-04-24 17:25:40.613835] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.498 [2024-04-24 17:25:40.613841] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.498 [2024-04-24 17:25:40.624212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.498 qpair failed and we were unable to recover it. 00:21:31.498 [2024-04-24 17:25:40.633786] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.498 [2024-04-24 17:25:40.633823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.498 [2024-04-24 17:25:40.633842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.498 [2024-04-24 17:25:40.633848] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.498 [2024-04-24 17:25:40.633855] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.498 [2024-04-24 17:25:40.644067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.498 qpair failed and we were unable to recover it. 00:21:31.498 [2024-04-24 17:25:40.653927] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.498 [2024-04-24 17:25:40.653969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.498 [2024-04-24 17:25:40.653984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.498 [2024-04-24 17:25:40.653991] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.498 [2024-04-24 17:25:40.653997] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.498 [2024-04-24 17:25:40.664194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.498 qpair failed and we were unable to recover it. 
00:21:31.498 [2024-04-24 17:25:40.673898] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.498 [2024-04-24 17:25:40.673936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.498 [2024-04-24 17:25:40.673951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.498 [2024-04-24 17:25:40.673958] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.498 [2024-04-24 17:25:40.673964] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.498 [2024-04-24 17:25:40.684350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.498 qpair failed and we were unable to recover it. 00:21:31.498 [2024-04-24 17:25:40.694083] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.498 [2024-04-24 17:25:40.694120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.498 [2024-04-24 17:25:40.694135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.498 [2024-04-24 17:25:40.694141] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.498 [2024-04-24 17:25:40.694148] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.498 [2024-04-24 17:25:40.704486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.498 qpair failed and we were unable to recover it. 00:21:31.498 [2024-04-24 17:25:40.714165] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.498 [2024-04-24 17:25:40.714206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.498 [2024-04-24 17:25:40.714221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.498 [2024-04-24 17:25:40.714227] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.498 [2024-04-24 17:25:40.714233] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.498 [2024-04-24 17:25:40.724615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.498 qpair failed and we were unable to recover it. 
00:21:31.498 [2024-04-24 17:25:40.734190] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.498 [2024-04-24 17:25:40.734230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.498 [2024-04-24 17:25:40.734244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.498 [2024-04-24 17:25:40.734254] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.498 [2024-04-24 17:25:40.734260] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.498 [2024-04-24 17:25:40.744642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.498 qpair failed and we were unable to recover it. 00:21:31.755 [2024-04-24 17:25:40.754176] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.755 [2024-04-24 17:25:40.754224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.755 [2024-04-24 17:25:40.754238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.755 [2024-04-24 17:25:40.754245] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.755 [2024-04-24 17:25:40.754251] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.756 [2024-04-24 17:25:40.764760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.756 qpair failed and we were unable to recover it. 00:21:31.756 [2024-04-24 17:25:40.774183] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.756 [2024-04-24 17:25:40.774220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.756 [2024-04-24 17:25:40.774234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.756 [2024-04-24 17:25:40.774241] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.756 [2024-04-24 17:25:40.774247] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.756 [2024-04-24 17:25:40.784639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.756 qpair failed and we were unable to recover it. 
00:21:31.756 [2024-04-24 17:25:40.794323] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.756 [2024-04-24 17:25:40.794362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.756 [2024-04-24 17:25:40.794376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.756 [2024-04-24 17:25:40.794383] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.756 [2024-04-24 17:25:40.794390] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.756 [2024-04-24 17:25:40.804336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.756 qpair failed and we were unable to recover it. 00:21:31.756 [2024-04-24 17:25:40.814201] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.756 [2024-04-24 17:25:40.814241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.756 [2024-04-24 17:25:40.814256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.756 [2024-04-24 17:25:40.814263] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.756 [2024-04-24 17:25:40.814269] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.756 [2024-04-24 17:25:40.824497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.756 qpair failed and we were unable to recover it. 00:21:31.756 [2024-04-24 17:25:40.834342] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.756 [2024-04-24 17:25:40.834383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.756 [2024-04-24 17:25:40.834397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.756 [2024-04-24 17:25:40.834404] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.756 [2024-04-24 17:25:40.834410] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.756 [2024-04-24 17:25:40.844528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.756 qpair failed and we were unable to recover it. 
00:21:31.756 [2024-04-24 17:25:40.854364] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.756 [2024-04-24 17:25:40.854398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.756 [2024-04-24 17:25:40.854413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.756 [2024-04-24 17:25:40.854419] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.756 [2024-04-24 17:25:40.854425] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.756 [2024-04-24 17:25:40.864727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.756 qpair failed and we were unable to recover it. 00:21:31.756 [2024-04-24 17:25:40.874443] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.756 [2024-04-24 17:25:40.874484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.756 [2024-04-24 17:25:40.874498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.756 [2024-04-24 17:25:40.874505] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.756 [2024-04-24 17:25:40.874511] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.756 [2024-04-24 17:25:40.884590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.756 qpair failed and we were unable to recover it. 00:21:31.756 [2024-04-24 17:25:40.894490] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.756 [2024-04-24 17:25:40.894529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.756 [2024-04-24 17:25:40.894543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.756 [2024-04-24 17:25:40.894550] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.756 [2024-04-24 17:25:40.894556] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.756 [2024-04-24 17:25:40.904798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.756 qpair failed and we were unable to recover it. 
00:21:31.756 [2024-04-24 17:25:40.914504] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.756 [2024-04-24 17:25:40.914538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.756 [2024-04-24 17:25:40.914555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.756 [2024-04-24 17:25:40.914562] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.756 [2024-04-24 17:25:40.914568] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.756 [2024-04-24 17:25:40.924879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.756 qpair failed and we were unable to recover it. 00:21:31.756 [2024-04-24 17:25:40.934596] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.756 [2024-04-24 17:25:40.934631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.756 [2024-04-24 17:25:40.934646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.756 [2024-04-24 17:25:40.934653] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.756 [2024-04-24 17:25:40.934659] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.756 [2024-04-24 17:25:40.944797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.756 qpair failed and we were unable to recover it. 00:21:31.756 [2024-04-24 17:25:40.954799] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.756 [2024-04-24 17:25:40.954844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.756 [2024-04-24 17:25:40.954859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.756 [2024-04-24 17:25:40.954867] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.756 [2024-04-24 17:25:40.954873] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.756 [2024-04-24 17:25:40.965060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.756 qpair failed and we were unable to recover it. 
00:21:31.756 [2024-04-24 17:25:40.974620] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.756 [2024-04-24 17:25:40.974668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.756 [2024-04-24 17:25:40.974683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.756 [2024-04-24 17:25:40.974690] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.756 [2024-04-24 17:25:40.974696] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:31.756 [2024-04-24 17:25:40.985084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.756 qpair failed and we were unable to recover it. 00:21:31.756 [2024-04-24 17:25:40.994811] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:31.756 [2024-04-24 17:25:40.994849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:31.756 [2024-04-24 17:25:40.994864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:31.756 [2024-04-24 17:25:40.994871] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:31.756 [2024-04-24 17:25:40.994877] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.014 [2024-04-24 17:25:41.005188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.014 qpair failed and we were unable to recover it. 00:21:32.014 [2024-04-24 17:25:41.014910] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.014 [2024-04-24 17:25:41.014954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.014 [2024-04-24 17:25:41.014968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.014 [2024-04-24 17:25:41.014975] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.014 [2024-04-24 17:25:41.014981] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.014 [2024-04-24 17:25:41.025339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.014 qpair failed and we were unable to recover it. 
00:21:32.014 [2024-04-24 17:25:41.035014] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.014 [2024-04-24 17:25:41.035057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.014 [2024-04-24 17:25:41.035073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.014 [2024-04-24 17:25:41.035080] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.014 [2024-04-24 17:25:41.035086] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.014 [2024-04-24 17:25:41.045225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.014 qpair failed and we were unable to recover it. 00:21:32.014 [2024-04-24 17:25:41.054994] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.014 [2024-04-24 17:25:41.055032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.014 [2024-04-24 17:25:41.055046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.014 [2024-04-24 17:25:41.055054] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.014 [2024-04-24 17:25:41.055060] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.014 [2024-04-24 17:25:41.065356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.014 qpair failed and we were unable to recover it. 00:21:32.014 [2024-04-24 17:25:41.074917] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.014 [2024-04-24 17:25:41.074960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.014 [2024-04-24 17:25:41.074974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.014 [2024-04-24 17:25:41.074981] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.014 [2024-04-24 17:25:41.074987] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.014 [2024-04-24 17:25:41.085473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.014 qpair failed and we were unable to recover it. 
00:21:32.014 [2024-04-24 17:25:41.095106] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.014 [2024-04-24 17:25:41.095149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.014 [2024-04-24 17:25:41.095163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.014 [2024-04-24 17:25:41.095170] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.014 [2024-04-24 17:25:41.095176] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.014 [2024-04-24 17:25:41.105338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.014 qpair failed and we were unable to recover it. 00:21:32.014 [2024-04-24 17:25:41.115142] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.014 [2024-04-24 17:25:41.115180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.014 [2024-04-24 17:25:41.115194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.014 [2024-04-24 17:25:41.115201] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.014 [2024-04-24 17:25:41.115207] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.014 [2024-04-24 17:25:41.125525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.014 qpair failed and we were unable to recover it. 00:21:32.014 [2024-04-24 17:25:41.135093] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.015 [2024-04-24 17:25:41.135130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.015 [2024-04-24 17:25:41.135144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.015 [2024-04-24 17:25:41.135151] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.015 [2024-04-24 17:25:41.135157] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.015 [2024-04-24 17:25:41.145600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.015 qpair failed and we were unable to recover it. 
00:21:32.015 [2024-04-24 17:25:41.155243] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.015 [2024-04-24 17:25:41.155282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.015 [2024-04-24 17:25:41.155297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.015 [2024-04-24 17:25:41.155304] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.015 [2024-04-24 17:25:41.155310] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.015 [2024-04-24 17:25:41.165538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.015 qpair failed and we were unable to recover it. 00:21:32.015 [2024-04-24 17:25:41.175307] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.015 [2024-04-24 17:25:41.175341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.015 [2024-04-24 17:25:41.175357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.015 [2024-04-24 17:25:41.175367] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.015 [2024-04-24 17:25:41.175373] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.015 [2024-04-24 17:25:41.185681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.015 qpair failed and we were unable to recover it. 00:21:32.015 [2024-04-24 17:25:41.195369] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.015 [2024-04-24 17:25:41.195411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.015 [2024-04-24 17:25:41.195425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.015 [2024-04-24 17:25:41.195433] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.015 [2024-04-24 17:25:41.195439] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.015 [2024-04-24 17:25:41.205699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.015 qpair failed and we were unable to recover it. 
00:21:32.015 [2024-04-24 17:25:41.215424] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.015 [2024-04-24 17:25:41.215468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.015 [2024-04-24 17:25:41.215482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.015 [2024-04-24 17:25:41.215489] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.015 [2024-04-24 17:25:41.215495] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.015 [2024-04-24 17:25:41.225856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.015 qpair failed and we were unable to recover it. 00:21:32.015 [2024-04-24 17:25:41.235468] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.015 [2024-04-24 17:25:41.235507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.015 [2024-04-24 17:25:41.235521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.015 [2024-04-24 17:25:41.235528] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.015 [2024-04-24 17:25:41.235534] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.015 [2024-04-24 17:25:41.245739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.015 qpair failed and we were unable to recover it. 00:21:32.015 [2024-04-24 17:25:41.255526] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.015 [2024-04-24 17:25:41.255566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.015 [2024-04-24 17:25:41.255580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.015 [2024-04-24 17:25:41.255588] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.015 [2024-04-24 17:25:41.255594] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.273 [2024-04-24 17:25:41.265990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.273 qpair failed and we were unable to recover it. 
00:21:32.273 [2024-04-24 17:25:41.275629] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.273 [2024-04-24 17:25:41.275671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.273 [2024-04-24 17:25:41.275685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.273 [2024-04-24 17:25:41.275692] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.273 [2024-04-24 17:25:41.275698] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.273 [2024-04-24 17:25:41.285920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.273 qpair failed and we were unable to recover it. 00:21:32.273 [2024-04-24 17:25:41.295698] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.273 [2024-04-24 17:25:41.295737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.273 [2024-04-24 17:25:41.295752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.273 [2024-04-24 17:25:41.295758] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.273 [2024-04-24 17:25:41.295765] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.273 [2024-04-24 17:25:41.306090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.273 qpair failed and we were unable to recover it. 00:21:32.273 [2024-04-24 17:25:41.315685] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.273 [2024-04-24 17:25:41.315722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.273 [2024-04-24 17:25:41.315736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.273 [2024-04-24 17:25:41.315743] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.273 [2024-04-24 17:25:41.315748] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.273 [2024-04-24 17:25:41.326193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.273 qpair failed and we were unable to recover it. 
00:21:32.273 [2024-04-24 17:25:41.335817] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.273 [2024-04-24 17:25:41.335859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.273 [2024-04-24 17:25:41.335873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.273 [2024-04-24 17:25:41.335880] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.273 [2024-04-24 17:25:41.335886] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.273 [2024-04-24 17:25:41.346057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.273 qpair failed and we were unable to recover it. 00:21:32.273 [2024-04-24 17:25:41.355800] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.273 [2024-04-24 17:25:41.355843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.273 [2024-04-24 17:25:41.355860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.273 [2024-04-24 17:25:41.355867] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.273 [2024-04-24 17:25:41.355874] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.273 [2024-04-24 17:25:41.366257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.273 qpair failed and we were unable to recover it. 00:21:32.273 [2024-04-24 17:25:41.375922] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.273 [2024-04-24 17:25:41.375959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.273 [2024-04-24 17:25:41.375974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.273 [2024-04-24 17:25:41.375980] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.273 [2024-04-24 17:25:41.375987] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.273 [2024-04-24 17:25:41.386293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.273 qpair failed and we were unable to recover it. 
00:21:32.273 [2024-04-24 17:25:41.396002] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.273 [2024-04-24 17:25:41.396043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.273 [2024-04-24 17:25:41.396058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.273 [2024-04-24 17:25:41.396065] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.273 [2024-04-24 17:25:41.396071] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.273 [2024-04-24 17:25:41.406486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.273 qpair failed and we were unable to recover it. 00:21:32.273 [2024-04-24 17:25:41.416006] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.273 [2024-04-24 17:25:41.416050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.273 [2024-04-24 17:25:41.416064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.273 [2024-04-24 17:25:41.416071] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.273 [2024-04-24 17:25:41.416078] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.273 [2024-04-24 17:25:41.426440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.273 qpair failed and we were unable to recover it. 00:21:32.273 [2024-04-24 17:25:41.436204] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.273 [2024-04-24 17:25:41.436241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.273 [2024-04-24 17:25:41.436256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.273 [2024-04-24 17:25:41.436263] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.273 [2024-04-24 17:25:41.436269] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.273 [2024-04-24 17:25:41.446432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.273 qpair failed and we were unable to recover it. 
00:21:32.273 [2024-04-24 17:25:41.456122] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.273 [2024-04-24 17:25:41.456159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.273 [2024-04-24 17:25:41.456173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.273 [2024-04-24 17:25:41.456180] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.273 [2024-04-24 17:25:41.456187] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.273 [2024-04-24 17:25:41.466525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.273 qpair failed and we were unable to recover it. 00:21:32.273 [2024-04-24 17:25:41.476228] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.273 [2024-04-24 17:25:41.476265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.273 [2024-04-24 17:25:41.476279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.273 [2024-04-24 17:25:41.476286] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.273 [2024-04-24 17:25:41.476292] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.273 [2024-04-24 17:25:41.486581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.273 qpair failed and we were unable to recover it. 00:21:32.273 [2024-04-24 17:25:41.496294] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.273 [2024-04-24 17:25:41.496330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.273 [2024-04-24 17:25:41.496344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.273 [2024-04-24 17:25:41.496350] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.273 [2024-04-24 17:25:41.496356] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.273 [2024-04-24 17:25:41.506688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.273 qpair failed and we were unable to recover it. 
00:21:32.273 [2024-04-24 17:25:41.516411] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.273 [2024-04-24 17:25:41.516448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.274 [2024-04-24 17:25:41.516463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.274 [2024-04-24 17:25:41.516469] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.274 [2024-04-24 17:25:41.516476] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.531 [2024-04-24 17:25:41.526968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.531 qpair failed and we were unable to recover it. 00:21:32.531 [2024-04-24 17:25:41.536485] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.531 [2024-04-24 17:25:41.536530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.531 [2024-04-24 17:25:41.536545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.531 [2024-04-24 17:25:41.536552] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.531 [2024-04-24 17:25:41.536559] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.531 [2024-04-24 17:25:41.546869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.531 qpair failed and we were unable to recover it. 00:21:32.531 [2024-04-24 17:25:41.556501] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.531 [2024-04-24 17:25:41.556535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.531 [2024-04-24 17:25:41.556550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.531 [2024-04-24 17:25:41.556556] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.531 [2024-04-24 17:25:41.556563] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.531 [2024-04-24 17:25:41.567021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.531 qpair failed and we were unable to recover it. 
00:21:32.531 [2024-04-24 17:25:41.576590] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.531 [2024-04-24 17:25:41.576630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.531 [2024-04-24 17:25:41.576644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.531 [2024-04-24 17:25:41.576652] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.531 [2024-04-24 17:25:41.576658] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.531 [2024-04-24 17:25:41.586873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.531 qpair failed and we were unable to recover it. 00:21:32.531 [2024-04-24 17:25:41.596652] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.531 [2024-04-24 17:25:41.596695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.531 [2024-04-24 17:25:41.596709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.531 [2024-04-24 17:25:41.596716] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.531 [2024-04-24 17:25:41.596722] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.531 [2024-04-24 17:25:41.607012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.531 qpair failed and we were unable to recover it. 00:21:32.531 [2024-04-24 17:25:41.616699] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.531 [2024-04-24 17:25:41.616741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.531 [2024-04-24 17:25:41.616755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.531 [2024-04-24 17:25:41.616766] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.531 [2024-04-24 17:25:41.616772] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.531 [2024-04-24 17:25:41.627046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.531 qpair failed and we were unable to recover it. 
00:21:32.531 [2024-04-24 17:25:41.636715] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.531 [2024-04-24 17:25:41.636755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.531 [2024-04-24 17:25:41.636769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.531 [2024-04-24 17:25:41.636776] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.531 [2024-04-24 17:25:41.636783] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.531 [2024-04-24 17:25:41.647117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.531 qpair failed and we were unable to recover it. 00:21:32.531 [2024-04-24 17:25:41.656791] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.531 [2024-04-24 17:25:41.656832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.531 [2024-04-24 17:25:41.656847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.531 [2024-04-24 17:25:41.656855] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.531 [2024-04-24 17:25:41.656861] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.531 [2024-04-24 17:25:41.667057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.531 qpair failed and we were unable to recover it. 00:21:32.532 [2024-04-24 17:25:41.676871] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.532 [2024-04-24 17:25:41.676912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.532 [2024-04-24 17:25:41.676926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.532 [2024-04-24 17:25:41.676933] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.532 [2024-04-24 17:25:41.676940] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.532 [2024-04-24 17:25:41.687355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.532 qpair failed and we were unable to recover it. 
00:21:32.532 [2024-04-24 17:25:41.696999] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.532 [2024-04-24 17:25:41.697044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.532 [2024-04-24 17:25:41.697059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.532 [2024-04-24 17:25:41.697066] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.532 [2024-04-24 17:25:41.697072] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.532 [2024-04-24 17:25:41.707287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.532 qpair failed and we were unable to recover it. 00:21:32.532 [2024-04-24 17:25:41.716962] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.532 [2024-04-24 17:25:41.717000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.532 [2024-04-24 17:25:41.717015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.532 [2024-04-24 17:25:41.717022] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.532 [2024-04-24 17:25:41.717028] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.532 [2024-04-24 17:25:41.727605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.532 qpair failed and we were unable to recover it. 00:21:32.532 [2024-04-24 17:25:41.737103] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.532 [2024-04-24 17:25:41.737142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.532 [2024-04-24 17:25:41.737156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.532 [2024-04-24 17:25:41.737163] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.532 [2024-04-24 17:25:41.737169] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.532 [2024-04-24 17:25:41.747660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.532 qpair failed and we were unable to recover it. 
00:21:32.532 [2024-04-24 17:25:41.757152] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.532 [2024-04-24 17:25:41.757192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.532 [2024-04-24 17:25:41.757207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.532 [2024-04-24 17:25:41.757214] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.532 [2024-04-24 17:25:41.757221] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.532 [2024-04-24 17:25:41.767769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.532 qpair failed and we were unable to recover it. 00:21:32.532 [2024-04-24 17:25:41.777318] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.532 [2024-04-24 17:25:41.777361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.532 [2024-04-24 17:25:41.777375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.532 [2024-04-24 17:25:41.777382] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.532 [2024-04-24 17:25:41.777388] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.789 [2024-04-24 17:25:41.787858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.789 qpair failed and we were unable to recover it. 00:21:32.789 [2024-04-24 17:25:41.797372] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.789 [2024-04-24 17:25:41.797414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.789 [2024-04-24 17:25:41.797432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.789 [2024-04-24 17:25:41.797439] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.789 [2024-04-24 17:25:41.797445] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.789 [2024-04-24 17:25:41.807812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.789 qpair failed and we were unable to recover it. 
00:21:32.789 [2024-04-24 17:25:41.817263] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.790 [2024-04-24 17:25:41.817297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.790 [2024-04-24 17:25:41.817311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.790 [2024-04-24 17:25:41.817318] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.790 [2024-04-24 17:25:41.817324] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.790 [2024-04-24 17:25:41.827728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.790 qpair failed and we were unable to recover it. 00:21:32.790 [2024-04-24 17:25:41.837378] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.790 [2024-04-24 17:25:41.837416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.790 [2024-04-24 17:25:41.837429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.790 [2024-04-24 17:25:41.837436] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.790 [2024-04-24 17:25:41.837443] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.790 [2024-04-24 17:25:41.847830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.790 qpair failed and we were unable to recover it. 00:21:32.790 [2024-04-24 17:25:41.857375] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.790 [2024-04-24 17:25:41.857420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.790 [2024-04-24 17:25:41.857435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.790 [2024-04-24 17:25:41.857443] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.790 [2024-04-24 17:25:41.857449] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.790 [2024-04-24 17:25:41.867969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.790 qpair failed and we were unable to recover it. 
00:21:32.790 [2024-04-24 17:25:41.877543] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.790 [2024-04-24 17:25:41.877582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.790 [2024-04-24 17:25:41.877596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.790 [2024-04-24 17:25:41.877603] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.790 [2024-04-24 17:25:41.877610] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.790 [2024-04-24 17:25:41.887868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.790 qpair failed and we were unable to recover it. 00:21:32.790 [2024-04-24 17:25:41.897540] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.790 [2024-04-24 17:25:41.897572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.790 [2024-04-24 17:25:41.897587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.790 [2024-04-24 17:25:41.897594] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.790 [2024-04-24 17:25:41.897600] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.790 [2024-04-24 17:25:41.908146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.790 qpair failed and we were unable to recover it. 00:21:32.790 [2024-04-24 17:25:41.917608] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.790 [2024-04-24 17:25:41.917647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.790 [2024-04-24 17:25:41.917662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.790 [2024-04-24 17:25:41.917669] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.790 [2024-04-24 17:25:41.917676] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.790 [2024-04-24 17:25:41.928123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.790 qpair failed and we were unable to recover it. 
00:21:32.790 [2024-04-24 17:25:41.937616] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.790 [2024-04-24 17:25:41.937653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.790 [2024-04-24 17:25:41.937670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.790 [2024-04-24 17:25:41.937677] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.790 [2024-04-24 17:25:41.937683] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.790 [2024-04-24 17:25:41.948223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.790 qpair failed and we were unable to recover it. 00:21:32.790 [2024-04-24 17:25:41.957808] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.790 [2024-04-24 17:25:41.957847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.790 [2024-04-24 17:25:41.957862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.790 [2024-04-24 17:25:41.957869] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.790 [2024-04-24 17:25:41.957875] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.790 [2024-04-24 17:25:41.968103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.790 qpair failed and we were unable to recover it. 00:21:32.790 [2024-04-24 17:25:41.977766] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.790 [2024-04-24 17:25:41.977800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.790 [2024-04-24 17:25:41.977814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.790 [2024-04-24 17:25:41.977821] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.790 [2024-04-24 17:25:41.977832] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.790 [2024-04-24 17:25:41.988295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.790 qpair failed and we were unable to recover it. 
00:21:32.790 [2024-04-24 17:25:41.997889] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.790 [2024-04-24 17:25:41.997930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.790 [2024-04-24 17:25:41.997944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.790 [2024-04-24 17:25:41.997951] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.790 [2024-04-24 17:25:41.997958] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.790 [2024-04-24 17:25:42.008439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.790 qpair failed and we were unable to recover it. 00:21:32.790 [2024-04-24 17:25:42.017922] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:32.790 [2024-04-24 17:25:42.017964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:32.790 [2024-04-24 17:25:42.017978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:32.790 [2024-04-24 17:25:42.017985] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:32.790 [2024-04-24 17:25:42.017991] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:32.790 [2024-04-24 17:25:42.028482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:32.790 qpair failed and we were unable to recover it. 00:21:33.048 [2024-04-24 17:25:42.037920] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.048 [2024-04-24 17:25:42.037968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.048 [2024-04-24 17:25:42.037982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.048 [2024-04-24 17:25:42.037989] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.048 [2024-04-24 17:25:42.037995] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.048 [2024-04-24 17:25:42.048396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.048 qpair failed and we were unable to recover it. 
00:21:33.048 [2024-04-24 17:25:42.057950] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.048 [2024-04-24 17:25:42.057994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.048 [2024-04-24 17:25:42.058008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.048 [2024-04-24 17:25:42.058018] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.048 [2024-04-24 17:25:42.058024] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.048 [2024-04-24 17:25:42.068378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.048 qpair failed and we were unable to recover it. 00:21:33.048 [2024-04-24 17:25:42.078071] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.048 [2024-04-24 17:25:42.078110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.048 [2024-04-24 17:25:42.078124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.048 [2024-04-24 17:25:42.078131] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.048 [2024-04-24 17:25:42.078137] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.048 [2024-04-24 17:25:42.088611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.048 qpair failed and we were unable to recover it. 00:21:33.048 [2024-04-24 17:25:42.098156] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.048 [2024-04-24 17:25:42.098198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.048 [2024-04-24 17:25:42.098212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.048 [2024-04-24 17:25:42.098219] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.048 [2024-04-24 17:25:42.098225] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.048 [2024-04-24 17:25:42.108483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.048 qpair failed and we were unable to recover it. 
00:21:33.048 [2024-04-24 17:25:42.118132] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.049 [2024-04-24 17:25:42.118171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.049 [2024-04-24 17:25:42.118186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.049 [2024-04-24 17:25:42.118193] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.049 [2024-04-24 17:25:42.118200] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.049 [2024-04-24 17:25:42.128627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.049 qpair failed and we were unable to recover it. 00:21:33.049 [2024-04-24 17:25:42.138251] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.049 [2024-04-24 17:25:42.138288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.049 [2024-04-24 17:25:42.138303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.049 [2024-04-24 17:25:42.138310] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.049 [2024-04-24 17:25:42.138316] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.049 [2024-04-24 17:25:42.148665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.049 qpair failed and we were unable to recover it. 00:21:33.049 [2024-04-24 17:25:42.158362] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.049 [2024-04-24 17:25:42.158405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.049 [2024-04-24 17:25:42.158420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.049 [2024-04-24 17:25:42.158427] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.049 [2024-04-24 17:25:42.158433] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.049 [2024-04-24 17:25:42.168718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.049 qpair failed and we were unable to recover it. 
00:21:33.049 [2024-04-24 17:25:42.178310] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.049 [2024-04-24 17:25:42.178351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.049 [2024-04-24 17:25:42.178365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.049 [2024-04-24 17:25:42.178372] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.049 [2024-04-24 17:25:42.178378] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.049 [2024-04-24 17:25:42.188763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.049 qpair failed and we were unable to recover it. 00:21:33.049 [2024-04-24 17:25:42.198551] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.049 [2024-04-24 17:25:42.198587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.049 [2024-04-24 17:25:42.198602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.049 [2024-04-24 17:25:42.198609] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.049 [2024-04-24 17:25:42.198615] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.049 [2024-04-24 17:25:42.208950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.049 qpair failed and we were unable to recover it. 00:21:33.049 [2024-04-24 17:25:42.218487] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.049 [2024-04-24 17:25:42.218527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.049 [2024-04-24 17:25:42.218542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.049 [2024-04-24 17:25:42.218549] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.049 [2024-04-24 17:25:42.218555] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.049 [2024-04-24 17:25:42.228909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.049 qpair failed and we were unable to recover it. 
00:21:33.049 [2024-04-24 17:25:42.238623] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.049 [2024-04-24 17:25:42.238660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.049 [2024-04-24 17:25:42.238677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.049 [2024-04-24 17:25:42.238684] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.049 [2024-04-24 17:25:42.238690] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.049 [2024-04-24 17:25:42.248967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.049 qpair failed and we were unable to recover it. 00:21:33.049 [2024-04-24 17:25:42.258745] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.049 [2024-04-24 17:25:42.258791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.049 [2024-04-24 17:25:42.258805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.049 [2024-04-24 17:25:42.258813] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.049 [2024-04-24 17:25:42.258819] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.049 [2024-04-24 17:25:42.269020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.049 qpair failed and we were unable to recover it. 00:21:33.049 [2024-04-24 17:25:42.278611] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.049 [2024-04-24 17:25:42.278647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.049 [2024-04-24 17:25:42.278661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.049 [2024-04-24 17:25:42.278668] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.049 [2024-04-24 17:25:42.278674] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.049 [2024-04-24 17:25:42.289104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.049 qpair failed and we were unable to recover it. 
00:21:33.307 [2024-04-24 17:25:42.298833] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.307 [2024-04-24 17:25:42.298874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.307 [2024-04-24 17:25:42.298889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.307 [2024-04-24 17:25:42.298896] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.307 [2024-04-24 17:25:42.298903] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.307 [2024-04-24 17:25:42.309428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.307 qpair failed and we were unable to recover it. 00:21:33.307 [2024-04-24 17:25:42.318795] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.307 [2024-04-24 17:25:42.318841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.307 [2024-04-24 17:25:42.318855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.307 [2024-04-24 17:25:42.318862] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.307 [2024-04-24 17:25:42.318868] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.307 [2024-04-24 17:25:42.329069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.307 qpair failed and we were unable to recover it. 00:21:33.307 [2024-04-24 17:25:42.338829] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.307 [2024-04-24 17:25:42.338867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.307 [2024-04-24 17:25:42.338881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.307 [2024-04-24 17:25:42.338888] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.307 [2024-04-24 17:25:42.338894] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.307 [2024-04-24 17:25:42.349274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.307 qpair failed and we were unable to recover it. 
00:21:33.307 [2024-04-24 17:25:42.359026] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.307 [2024-04-24 17:25:42.359065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.307 [2024-04-24 17:25:42.359082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.307 [2024-04-24 17:25:42.359089] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.307 [2024-04-24 17:25:42.359095] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.307 [2024-04-24 17:25:42.369231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.307 qpair failed and we were unable to recover it. 00:21:33.307 [2024-04-24 17:25:42.378952] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.307 [2024-04-24 17:25:42.378992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.307 [2024-04-24 17:25:42.379007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.307 [2024-04-24 17:25:42.379014] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.307 [2024-04-24 17:25:42.379020] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.307 [2024-04-24 17:25:42.389372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.307 qpair failed and we were unable to recover it. 00:21:33.307 [2024-04-24 17:25:42.399104] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.307 [2024-04-24 17:25:42.399142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.307 [2024-04-24 17:25:42.399156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.307 [2024-04-24 17:25:42.399163] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.307 [2024-04-24 17:25:42.399169] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.307 [2024-04-24 17:25:42.409476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.307 qpair failed and we were unable to recover it. 
00:21:33.307 [2024-04-24 17:25:42.419091] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.307 [2024-04-24 17:25:42.419136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.307 [2024-04-24 17:25:42.419150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.307 [2024-04-24 17:25:42.419157] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.307 [2024-04-24 17:25:42.419164] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.307 [2024-04-24 17:25:42.429669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.307 qpair failed and we were unable to recover it. 00:21:33.307 [2024-04-24 17:25:42.439190] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.307 [2024-04-24 17:25:42.439232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.307 [2024-04-24 17:25:42.439246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.307 [2024-04-24 17:25:42.439253] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.307 [2024-04-24 17:25:42.439259] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.307 [2024-04-24 17:25:42.449726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.307 qpair failed and we were unable to recover it. 00:21:33.307 [2024-04-24 17:25:42.459371] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.307 [2024-04-24 17:25:42.459411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.307 [2024-04-24 17:25:42.459425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.307 [2024-04-24 17:25:42.459432] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.307 [2024-04-24 17:25:42.459438] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.307 [2024-04-24 17:25:42.469557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.307 qpair failed and we were unable to recover it. 
00:21:33.307 [2024-04-24 17:25:42.479271] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.307 [2024-04-24 17:25:42.479313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.307 [2024-04-24 17:25:42.479327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.307 [2024-04-24 17:25:42.479334] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.307 [2024-04-24 17:25:42.479340] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.307 [2024-04-24 17:25:42.489756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.307 qpair failed and we were unable to recover it. 00:21:33.307 [2024-04-24 17:25:42.499373] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.307 [2024-04-24 17:25:42.499418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.307 [2024-04-24 17:25:42.499432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.307 [2024-04-24 17:25:42.499444] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.307 [2024-04-24 17:25:42.499450] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.307 [2024-04-24 17:25:42.509852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.307 qpair failed and we were unable to recover it. 00:21:33.307 [2024-04-24 17:25:42.519539] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.307 [2024-04-24 17:25:42.519580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.307 [2024-04-24 17:25:42.519594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.307 [2024-04-24 17:25:42.519601] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.307 [2024-04-24 17:25:42.519607] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.308 [2024-04-24 17:25:42.529885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.308 qpair failed and we were unable to recover it. 
00:21:33.308 [2024-04-24 17:25:42.539487] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.308 [2024-04-24 17:25:42.539522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.308 [2024-04-24 17:25:42.539536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.308 [2024-04-24 17:25:42.539543] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.308 [2024-04-24 17:25:42.539549] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.308 [2024-04-24 17:25:42.549889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.308 qpair failed and we were unable to recover it. 00:21:33.565 [2024-04-24 17:25:42.559595] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.565 [2024-04-24 17:25:42.559639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.565 [2024-04-24 17:25:42.559654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.565 [2024-04-24 17:25:42.559660] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.565 [2024-04-24 17:25:42.559667] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.565 [2024-04-24 17:25:42.569962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.565 qpair failed and we were unable to recover it. 00:21:33.565 [2024-04-24 17:25:42.579517] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.565 [2024-04-24 17:25:42.579556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.565 [2024-04-24 17:25:42.579571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.565 [2024-04-24 17:25:42.579578] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.565 [2024-04-24 17:25:42.579584] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.565 [2024-04-24 17:25:42.590109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.565 qpair failed and we were unable to recover it. 
00:21:33.565 [2024-04-24 17:25:42.599686] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.565 [2024-04-24 17:25:42.599723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.565 [2024-04-24 17:25:42.599738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.565 [2024-04-24 17:25:42.599745] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.565 [2024-04-24 17:25:42.599751] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.565 [2024-04-24 17:25:42.610238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.565 qpair failed and we were unable to recover it. 00:21:33.565 [2024-04-24 17:25:42.619924] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.565 [2024-04-24 17:25:42.619961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.565 [2024-04-24 17:25:42.619975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.565 [2024-04-24 17:25:42.619982] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.565 [2024-04-24 17:25:42.619988] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.565 [2024-04-24 17:25:42.630227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.565 qpair failed and we were unable to recover it. 00:21:33.565 [2024-04-24 17:25:42.639728] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.565 [2024-04-24 17:25:42.639768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.565 [2024-04-24 17:25:42.639782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.565 [2024-04-24 17:25:42.639789] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.565 [2024-04-24 17:25:42.639795] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.565 [2024-04-24 17:25:42.650192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.565 qpair failed and we were unable to recover it. 
00:21:33.565 [2024-04-24 17:25:42.659983] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.565 [2024-04-24 17:25:42.660026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.565 [2024-04-24 17:25:42.660041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.565 [2024-04-24 17:25:42.660048] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.566 [2024-04-24 17:25:42.660054] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.566 [2024-04-24 17:25:42.670132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.566 qpair failed and we were unable to recover it. 00:21:33.566 [2024-04-24 17:25:42.679744] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.566 [2024-04-24 17:25:42.679779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.566 [2024-04-24 17:25:42.679795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.566 [2024-04-24 17:25:42.679802] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.566 [2024-04-24 17:25:42.679808] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.566 [2024-04-24 17:25:42.690350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.566 qpair failed and we were unable to recover it. 00:21:33.566 [2024-04-24 17:25:42.699918] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.566 [2024-04-24 17:25:42.699956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.566 [2024-04-24 17:25:42.699971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.566 [2024-04-24 17:25:42.699978] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.566 [2024-04-24 17:25:42.699984] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.566 [2024-04-24 17:25:42.710297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.566 qpair failed and we were unable to recover it. 
00:21:33.566 [2024-04-24 17:25:42.719865] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.566 [2024-04-24 17:25:42.719908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.566 [2024-04-24 17:25:42.719922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.566 [2024-04-24 17:25:42.719929] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.566 [2024-04-24 17:25:42.719936] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.566 [2024-04-24 17:25:42.730394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.566 qpair failed and we were unable to recover it. 00:21:33.566 [2024-04-24 17:25:42.739972] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.566 [2024-04-24 17:25:42.740013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.566 [2024-04-24 17:25:42.740027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.566 [2024-04-24 17:25:42.740034] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.566 [2024-04-24 17:25:42.740040] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.566 [2024-04-24 17:25:42.750426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.566 qpair failed and we were unable to recover it. 00:21:33.566 [2024-04-24 17:25:42.760038] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.566 [2024-04-24 17:25:42.760077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.566 [2024-04-24 17:25:42.760092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.566 [2024-04-24 17:25:42.760099] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.566 [2024-04-24 17:25:42.760105] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.566 [2024-04-24 17:25:42.770400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.566 qpair failed and we were unable to recover it. 
00:21:33.566 [2024-04-24 17:25:42.780039] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.566 [2024-04-24 17:25:42.780077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.566 [2024-04-24 17:25:42.780092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.566 [2024-04-24 17:25:42.780098] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.566 [2024-04-24 17:25:42.780105] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.566 [2024-04-24 17:25:42.790390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.566 qpair failed and we were unable to recover it. 00:21:33.566 [2024-04-24 17:25:42.800050] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.566 [2024-04-24 17:25:42.800092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.566 [2024-04-24 17:25:42.800106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.566 [2024-04-24 17:25:42.800113] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.566 [2024-04-24 17:25:42.800119] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.566 [2024-04-24 17:25:42.810604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.566 qpair failed and we were unable to recover it. 00:21:33.823 [2024-04-24 17:25:42.820162] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.823 [2024-04-24 17:25:42.820209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.823 [2024-04-24 17:25:42.820223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.823 [2024-04-24 17:25:42.820230] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.823 [2024-04-24 17:25:42.820237] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.823 [2024-04-24 17:25:42.830447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.823 qpair failed and we were unable to recover it. 
00:21:33.823 [2024-04-24 17:25:42.840302] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.823 [2024-04-24 17:25:42.840340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.823 [2024-04-24 17:25:42.840354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.823 [2024-04-24 17:25:42.840362] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.823 [2024-04-24 17:25:42.840368] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.823 [2024-04-24 17:25:42.850554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.823 qpair failed and we were unable to recover it. 00:21:33.823 [2024-04-24 17:25:42.860358] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.823 [2024-04-24 17:25:42.860396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.823 [2024-04-24 17:25:42.860410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.823 [2024-04-24 17:25:42.860418] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.823 [2024-04-24 17:25:42.860424] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.823 [2024-04-24 17:25:42.870735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.823 qpair failed and we were unable to recover it. 00:21:33.823 [2024-04-24 17:25:42.880275] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.823 [2024-04-24 17:25:42.880314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.823 [2024-04-24 17:25:42.880329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.823 [2024-04-24 17:25:42.880337] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.823 [2024-04-24 17:25:42.880343] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.823 [2024-04-24 17:25:42.890731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.823 qpair failed and we were unable to recover it. 
00:21:33.823 [2024-04-24 17:25:42.900470] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.823 [2024-04-24 17:25:42.900513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.823 [2024-04-24 17:25:42.900527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.823 [2024-04-24 17:25:42.900534] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.823 [2024-04-24 17:25:42.900540] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.823 [2024-04-24 17:25:42.910847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.823 qpair failed and we were unable to recover it. 00:21:33.823 [2024-04-24 17:25:42.920478] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.823 [2024-04-24 17:25:42.920519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.823 [2024-04-24 17:25:42.920533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.823 [2024-04-24 17:25:42.920540] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.823 [2024-04-24 17:25:42.920546] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.823 [2024-04-24 17:25:42.930898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.823 qpair failed and we were unable to recover it. 00:21:33.823 [2024-04-24 17:25:42.940446] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.823 [2024-04-24 17:25:42.940478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.823 [2024-04-24 17:25:42.940493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.823 [2024-04-24 17:25:42.940503] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.823 [2024-04-24 17:25:42.940508] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.823 [2024-04-24 17:25:42.950864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.823 qpair failed and we were unable to recover it. 
00:21:33.823 [2024-04-24 17:25:42.960614] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.823 [2024-04-24 17:25:42.960654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.823 [2024-04-24 17:25:42.960668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.823 [2024-04-24 17:25:42.960675] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.823 [2024-04-24 17:25:42.960682] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.823 [2024-04-24 17:25:42.970945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.823 qpair failed and we were unable to recover it. 00:21:33.823 [2024-04-24 17:25:42.980634] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.823 [2024-04-24 17:25:42.980674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.823 [2024-04-24 17:25:42.980688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.823 [2024-04-24 17:25:42.980695] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.823 [2024-04-24 17:25:42.980701] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.823 [2024-04-24 17:25:42.990949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.823 qpair failed and we were unable to recover it. 00:21:33.823 [2024-04-24 17:25:43.000705] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.823 [2024-04-24 17:25:43.000749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.823 [2024-04-24 17:25:43.000763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.823 [2024-04-24 17:25:43.000770] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.823 [2024-04-24 17:25:43.000777] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.823 [2024-04-24 17:25:43.011122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.823 qpair failed and we were unable to recover it. 
00:21:33.823 [2024-04-24 17:25:43.020795] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.823 [2024-04-24 17:25:43.020838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.823 [2024-04-24 17:25:43.020852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.823 [2024-04-24 17:25:43.020859] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.823 [2024-04-24 17:25:43.020866] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.823 [2024-04-24 17:25:43.031178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.823 qpair failed and we were unable to recover it. 00:21:33.823 [2024-04-24 17:25:43.040930] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.823 [2024-04-24 17:25:43.040967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.823 [2024-04-24 17:25:43.040982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.823 [2024-04-24 17:25:43.040989] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.823 [2024-04-24 17:25:43.040995] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:33.823 [2024-04-24 17:25:43.051183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:33.823 qpair failed and we were unable to recover it. 00:21:33.823 [2024-04-24 17:25:43.060841] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:33.823 [2024-04-24 17:25:43.060884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:33.823 [2024-04-24 17:25:43.060899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:33.823 [2024-04-24 17:25:43.060905] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:33.823 [2024-04-24 17:25:43.060911] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.081 [2024-04-24 17:25:43.071215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.081 qpair failed and we were unable to recover it. 
00:21:34.081 [2024-04-24 17:25:43.080992] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.081 [2024-04-24 17:25:43.081044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.081 [2024-04-24 17:25:43.081059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.081 [2024-04-24 17:25:43.081066] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.081 [2024-04-24 17:25:43.081072] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.081 [2024-04-24 17:25:43.091311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.081 qpair failed and we were unable to recover it. 00:21:34.081 [2024-04-24 17:25:43.101051] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.081 [2024-04-24 17:25:43.101091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.081 [2024-04-24 17:25:43.101105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.081 [2024-04-24 17:25:43.101112] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.081 [2024-04-24 17:25:43.101118] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.081 [2024-04-24 17:25:43.111334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.081 qpair failed and we were unable to recover it. 00:21:34.081 [2024-04-24 17:25:43.121068] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.081 [2024-04-24 17:25:43.121108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.081 [2024-04-24 17:25:43.121126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.081 [2024-04-24 17:25:43.121133] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.081 [2024-04-24 17:25:43.121139] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.081 [2024-04-24 17:25:43.131441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.081 qpair failed and we were unable to recover it. 
00:21:34.081 [2024-04-24 17:25:43.141153] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.081 [2024-04-24 17:25:43.141195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.081 [2024-04-24 17:25:43.141209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.081 [2024-04-24 17:25:43.141216] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.081 [2024-04-24 17:25:43.141222] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.081 [2024-04-24 17:25:43.151555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.081 qpair failed and we were unable to recover it. 00:21:34.081 [2024-04-24 17:25:43.161221] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.081 [2024-04-24 17:25:43.161260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.081 [2024-04-24 17:25:43.161274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.081 [2024-04-24 17:25:43.161282] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.081 [2024-04-24 17:25:43.161288] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.081 [2024-04-24 17:25:43.171493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.081 qpair failed and we were unable to recover it. 00:21:34.081 [2024-04-24 17:25:43.181292] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.081 [2024-04-24 17:25:43.181327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.081 [2024-04-24 17:25:43.181342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.081 [2024-04-24 17:25:43.181349] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.081 [2024-04-24 17:25:43.181355] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.081 [2024-04-24 17:25:43.191712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.081 qpair failed and we were unable to recover it. 
00:21:34.081 [2024-04-24 17:25:43.201224] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.081 [2024-04-24 17:25:43.201264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.081 [2024-04-24 17:25:43.201280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.081 [2024-04-24 17:25:43.201287] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.081 [2024-04-24 17:25:43.201293] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.081 [2024-04-24 17:25:43.211782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.081 qpair failed and we were unable to recover it. 00:21:34.081 [2024-04-24 17:25:43.221289] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.081 [2024-04-24 17:25:43.221329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.081 [2024-04-24 17:25:43.221343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.081 [2024-04-24 17:25:43.221350] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.081 [2024-04-24 17:25:43.221356] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.081 [2024-04-24 17:25:43.231690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.081 qpair failed and we were unable to recover it. 00:21:34.081 [2024-04-24 17:25:43.241477] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.081 [2024-04-24 17:25:43.241514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.081 [2024-04-24 17:25:43.241528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.081 [2024-04-24 17:25:43.241535] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.081 [2024-04-24 17:25:43.241541] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.081 [2024-04-24 17:25:43.251804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.081 qpair failed and we were unable to recover it. 
00:21:34.081 [2024-04-24 17:25:43.261302] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.081 [2024-04-24 17:25:43.261339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.081 [2024-04-24 17:25:43.261354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.081 [2024-04-24 17:25:43.261361] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.081 [2024-04-24 17:25:43.261367] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.081 [2024-04-24 17:25:43.271872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.081 qpair failed and we were unable to recover it. 00:21:34.081 [2024-04-24 17:25:43.281492] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.081 [2024-04-24 17:25:43.281533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.081 [2024-04-24 17:25:43.281547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.081 [2024-04-24 17:25:43.281554] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.081 [2024-04-24 17:25:43.281561] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.081 [2024-04-24 17:25:43.291833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.081 qpair failed and we were unable to recover it. 00:21:34.081 [2024-04-24 17:25:43.301558] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.081 [2024-04-24 17:25:43.301609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.081 [2024-04-24 17:25:43.301624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.081 [2024-04-24 17:25:43.301631] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.081 [2024-04-24 17:25:43.301637] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.081 [2024-04-24 17:25:43.312000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.081 qpair failed and we were unable to recover it. 
00:21:34.081 [2024-04-24 17:25:43.321625] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.081 [2024-04-24 17:25:43.321663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.081 [2024-04-24 17:25:43.321677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.081 [2024-04-24 17:25:43.321684] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.081 [2024-04-24 17:25:43.321690] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.339 [2024-04-24 17:25:43.332012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.339 qpair failed and we were unable to recover it. 00:21:34.339 [2024-04-24 17:25:43.341731] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.339 [2024-04-24 17:25:43.341774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.339 [2024-04-24 17:25:43.341788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.339 [2024-04-24 17:25:43.341795] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.339 [2024-04-24 17:25:43.341801] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.339 [2024-04-24 17:25:43.352022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.339 qpair failed and we were unable to recover it. 00:21:34.339 [2024-04-24 17:25:43.361752] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.339 [2024-04-24 17:25:43.361794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.339 [2024-04-24 17:25:43.361808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.339 [2024-04-24 17:25:43.361815] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.339 [2024-04-24 17:25:43.361821] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.339 [2024-04-24 17:25:43.372193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.339 qpair failed and we were unable to recover it. 
00:21:34.339 [2024-04-24 17:25:43.381911] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.339 [2024-04-24 17:25:43.381954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.339 [2024-04-24 17:25:43.381969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.339 [2024-04-24 17:25:43.381979] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.339 [2024-04-24 17:25:43.381985] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.339 [2024-04-24 17:25:43.392115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.339 qpair failed and we were unable to recover it. 00:21:34.339 [2024-04-24 17:25:43.402020] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.339 [2024-04-24 17:25:43.402058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.339 [2024-04-24 17:25:43.402073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.339 [2024-04-24 17:25:43.402080] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.339 [2024-04-24 17:25:43.402087] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.339 [2024-04-24 17:25:43.412261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.339 qpair failed and we were unable to recover it. 00:21:34.339 [2024-04-24 17:25:43.421989] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.339 [2024-04-24 17:25:43.422029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.339 [2024-04-24 17:25:43.422044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.339 [2024-04-24 17:25:43.422051] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.339 [2024-04-24 17:25:43.422056] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.339 [2024-04-24 17:25:43.432442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.339 qpair failed and we were unable to recover it. 
00:21:34.339 [2024-04-24 17:25:43.442000] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.339 [2024-04-24 17:25:43.442043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.339 [2024-04-24 17:25:43.442058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.339 [2024-04-24 17:25:43.442065] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.339 [2024-04-24 17:25:43.442071] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.339 [2024-04-24 17:25:43.452486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.339 qpair failed and we were unable to recover it. 00:21:34.339 [2024-04-24 17:25:43.461962] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.339 [2024-04-24 17:25:43.461999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.339 [2024-04-24 17:25:43.462013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.339 [2024-04-24 17:25:43.462020] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.339 [2024-04-24 17:25:43.462026] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.339 [2024-04-24 17:25:43.472370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.339 qpair failed and we were unable to recover it. 00:21:34.339 [2024-04-24 17:25:43.482084] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.339 [2024-04-24 17:25:43.482116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.339 [2024-04-24 17:25:43.482131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.339 [2024-04-24 17:25:43.482138] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.339 [2024-04-24 17:25:43.482144] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.339 [2024-04-24 17:25:43.492488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.339 qpair failed and we were unable to recover it. 
00:21:34.339 [2024-04-24 17:25:43.502178] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.339 [2024-04-24 17:25:43.502214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.339 [2024-04-24 17:25:43.502228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.339 [2024-04-24 17:25:43.502235] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.339 [2024-04-24 17:25:43.502242] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.340 [2024-04-24 17:25:43.512448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.340 qpair failed and we were unable to recover it. 00:21:34.340 [2024-04-24 17:25:43.522319] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.340 [2024-04-24 17:25:43.522360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.340 [2024-04-24 17:25:43.522374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.340 [2024-04-24 17:25:43.522381] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.340 [2024-04-24 17:25:43.522387] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.340 [2024-04-24 17:25:43.532601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.340 qpair failed and we were unable to recover it. 00:21:34.340 [2024-04-24 17:25:43.542338] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.340 [2024-04-24 17:25:43.542377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.340 [2024-04-24 17:25:43.542392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.340 [2024-04-24 17:25:43.542399] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.340 [2024-04-24 17:25:43.542405] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.340 [2024-04-24 17:25:43.552819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.340 qpair failed and we were unable to recover it. 
00:21:34.340 [2024-04-24 17:25:43.562339] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.340 [2024-04-24 17:25:43.562373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.340 [2024-04-24 17:25:43.562392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.340 [2024-04-24 17:25:43.562398] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.340 [2024-04-24 17:25:43.562404] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.340 [2024-04-24 17:25:43.572830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.340 qpair failed and we were unable to recover it. 00:21:34.340 [2024-04-24 17:25:43.582521] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.340 [2024-04-24 17:25:43.582560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.340 [2024-04-24 17:25:43.582574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.340 [2024-04-24 17:25:43.582581] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.340 [2024-04-24 17:25:43.582587] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.597 [2024-04-24 17:25:43.593091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.597 qpair failed and we were unable to recover it. 00:21:34.597 [2024-04-24 17:25:43.602558] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.597 [2024-04-24 17:25:43.602601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.597 [2024-04-24 17:25:43.602615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.597 [2024-04-24 17:25:43.602622] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.598 [2024-04-24 17:25:43.602628] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.598 [2024-04-24 17:25:43.612797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.598 qpair failed and we were unable to recover it. 
00:21:34.598 [2024-04-24 17:25:43.622533] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.598 [2024-04-24 17:25:43.622572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.598 [2024-04-24 17:25:43.622586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.598 [2024-04-24 17:25:43.622593] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.598 [2024-04-24 17:25:43.622599] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.598 [2024-04-24 17:25:43.633075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.598 qpair failed and we were unable to recover it. 00:21:34.598 [2024-04-24 17:25:43.642550] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.598 [2024-04-24 17:25:43.642591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.598 [2024-04-24 17:25:43.642605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.598 [2024-04-24 17:25:43.642612] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.598 [2024-04-24 17:25:43.642618] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.598 [2024-04-24 17:25:43.653122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.598 qpair failed and we were unable to recover it. 00:21:34.598 [2024-04-24 17:25:43.662718] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.598 [2024-04-24 17:25:43.662753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.598 [2024-04-24 17:25:43.662767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.598 [2024-04-24 17:25:43.662774] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.598 [2024-04-24 17:25:43.662780] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.598 [2024-04-24 17:25:43.673175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.598 qpair failed and we were unable to recover it. 
00:21:34.598 [2024-04-24 17:25:43.682759] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.598 [2024-04-24 17:25:43.682799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.598 [2024-04-24 17:25:43.682812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.598 [2024-04-24 17:25:43.682819] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.598 [2024-04-24 17:25:43.682830] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.598 [2024-04-24 17:25:43.693253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.598 qpair failed and we were unable to recover it. 00:21:34.598 [2024-04-24 17:25:43.702752] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.598 [2024-04-24 17:25:43.702797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.598 [2024-04-24 17:25:43.702811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.598 [2024-04-24 17:25:43.702818] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.598 [2024-04-24 17:25:43.702824] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.598 [2024-04-24 17:25:43.713450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.598 qpair failed and we were unable to recover it. 00:21:34.598 [2024-04-24 17:25:43.722949] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.598 [2024-04-24 17:25:43.722986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.598 [2024-04-24 17:25:43.723001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.598 [2024-04-24 17:25:43.723008] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.598 [2024-04-24 17:25:43.723014] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.598 [2024-04-24 17:25:43.733210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.598 qpair failed and we were unable to recover it. 
00:21:34.598 [2024-04-24 17:25:43.742782] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.598 [2024-04-24 17:25:43.742829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.598 [2024-04-24 17:25:43.742844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.598 [2024-04-24 17:25:43.742851] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.598 [2024-04-24 17:25:43.742856] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.598 [2024-04-24 17:25:43.753369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.598 qpair failed and we were unable to recover it. 00:21:34.598 [2024-04-24 17:25:43.762986] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.598 [2024-04-24 17:25:43.763023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.598 [2024-04-24 17:25:43.763038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.598 [2024-04-24 17:25:43.763045] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.598 [2024-04-24 17:25:43.763051] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.598 [2024-04-24 17:25:43.773596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.598 qpair failed and we were unable to recover it. 00:21:34.598 [2024-04-24 17:25:43.783141] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.598 [2024-04-24 17:25:43.783185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.598 [2024-04-24 17:25:43.783199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.598 [2024-04-24 17:25:43.783206] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.598 [2024-04-24 17:25:43.783212] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.598 [2024-04-24 17:25:43.793603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.598 qpair failed and we were unable to recover it. 
00:21:34.598 [2024-04-24 17:25:43.803238] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.598 [2024-04-24 17:25:43.803276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.598 [2024-04-24 17:25:43.803292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.598 [2024-04-24 17:25:43.803299] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.598 [2024-04-24 17:25:43.803305] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.598 [2024-04-24 17:25:43.813607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.598 qpair failed and we were unable to recover it. 00:21:34.598 [2024-04-24 17:25:43.823278] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.598 [2024-04-24 17:25:43.823309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.598 [2024-04-24 17:25:43.823323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.598 [2024-04-24 17:25:43.823334] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.598 [2024-04-24 17:25:43.823341] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.598 [2024-04-24 17:25:43.833606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.598 qpair failed and we were unable to recover it. 00:21:34.598 [2024-04-24 17:25:43.843161] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.598 [2024-04-24 17:25:43.843204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.598 [2024-04-24 17:25:43.843219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.598 [2024-04-24 17:25:43.843226] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.598 [2024-04-24 17:25:43.843233] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.855 [2024-04-24 17:25:43.853848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.855 qpair failed and we were unable to recover it. 
00:21:34.855 [2024-04-24 17:25:43.863378] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.855 [2024-04-24 17:25:43.863426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.855 [2024-04-24 17:25:43.863440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.855 [2024-04-24 17:25:43.863447] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.855 [2024-04-24 17:25:43.863453] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.855 [2024-04-24 17:25:43.873870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.855 qpair failed and we were unable to recover it. 00:21:34.855 [2024-04-24 17:25:43.883481] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.855 [2024-04-24 17:25:43.883528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.855 [2024-04-24 17:25:43.883542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.855 [2024-04-24 17:25:43.883549] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.855 [2024-04-24 17:25:43.883555] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.855 [2024-04-24 17:25:43.893873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.855 qpair failed and we were unable to recover it. 00:21:34.855 [2024-04-24 17:25:43.903447] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.855 [2024-04-24 17:25:43.903485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.855 [2024-04-24 17:25:43.903499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.856 [2024-04-24 17:25:43.903506] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.856 [2024-04-24 17:25:43.903512] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.856 [2024-04-24 17:25:43.913972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.856 qpair failed and we were unable to recover it. 
00:21:34.856 [2024-04-24 17:25:43.923577] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:34.856 [2024-04-24 17:25:43.923620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:34.856 [2024-04-24 17:25:43.923635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:34.856 [2024-04-24 17:25:43.923641] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:34.856 [2024-04-24 17:25:43.923647] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:34.856 [2024-04-24 17:25:43.934105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:34.856 qpair failed and we were unable to recover it. 00:21:35.941 Write completed with error (sct=0, sc=8) 00:21:35.941 starting I/O failed 00:21:35.941 Read completed with error (sct=0, sc=8) 00:21:35.941 starting I/O failed 00:21:35.941 Read completed with error (sct=0, sc=8) 00:21:35.941 starting I/O failed 00:21:35.941 Read completed with error (sct=0, sc=8) 00:21:35.941 starting I/O failed 00:21:35.941 Write completed with error (sct=0, sc=8) 00:21:35.941 starting I/O failed 00:21:35.941 Write completed with error (sct=0, sc=8) 00:21:35.941 starting I/O failed 00:21:35.941 Write completed with error (sct=0, sc=8) 00:21:35.941 starting I/O failed 00:21:35.941 Read completed with error (sct=0, sc=8) 00:21:35.941 starting I/O failed 00:21:35.941 Write completed with error (sct=0, sc=8) 00:21:35.941 starting I/O failed 00:21:35.941 Read completed with error (sct=0, sc=8) 00:21:35.941 starting I/O failed 00:21:35.941 Write completed with error (sct=0, sc=8) 00:21:35.941 starting I/O failed 00:21:35.941 Write completed with error (sct=0, sc=8) 00:21:35.941 starting I/O failed 00:21:35.941 Write completed with error (sct=0, sc=8) 00:21:35.941 starting I/O failed 00:21:35.941 Write completed with error (sct=0, sc=8) 00:21:35.941 starting I/O failed 00:21:35.941 Read completed with error (sct=0, sc=8) 00:21:35.941 starting I/O failed 00:21:35.941 Write completed with error (sct=0, sc=8) 00:21:35.941 starting I/O failed 00:21:35.941 Write completed with error (sct=0, sc=8) 00:21:35.941 starting I/O failed 00:21:35.941 Write completed with error (sct=0, sc=8) 00:21:35.941 starting I/O failed 00:21:35.941 Read completed with error (sct=0, sc=8) 00:21:35.941 starting I/O failed 00:21:35.941 Read completed with error (sct=0, sc=8) 00:21:35.941 starting I/O failed 00:21:35.941 Read completed with error (sct=0, sc=8) 00:21:35.941 starting I/O failed 00:21:35.941 Write completed with error (sct=0, sc=8) 00:21:35.941 starting I/O failed 00:21:35.941 Write completed with error (sct=0, sc=8) 00:21:35.941 starting I/O failed 00:21:35.941 Write completed with error (sct=0, sc=8) 00:21:35.941 starting I/O failed 00:21:35.941 Read completed with error (sct=0, sc=8) 00:21:35.941 starting I/O failed 00:21:35.941 Read completed with error (sct=0, sc=8) 00:21:35.941 starting I/O failed 00:21:35.941 Read completed with error (sct=0, sc=8) 00:21:35.941 starting I/O failed 00:21:35.941 Read completed with error (sct=0, sc=8) 00:21:35.941 starting I/O failed 00:21:35.941 Write completed with error (sct=0, sc=8) 00:21:35.941 starting I/O failed 00:21:35.941 Read 
completed with error (sct=0, sc=8) 00:21:35.941 starting I/O failed 00:21:35.941 Write completed with error (sct=0, sc=8) 00:21:35.941 starting I/O failed 00:21:35.941 Read completed with error (sct=0, sc=8) 00:21:35.941 starting I/O failed 00:21:35.941 [2024-04-24 17:25:44.939080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:35.941 [2024-04-24 17:25:44.946162] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:35.941 [2024-04-24 17:25:44.946206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:35.941 [2024-04-24 17:25:44.946223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:35.941 [2024-04-24 17:25:44.946230] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:35.941 [2024-04-24 17:25:44.946237] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d15c0 00:21:35.941 [2024-04-24 17:25:44.956683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:35.941 qpair failed and we were unable to recover it. 00:21:35.941 [2024-04-24 17:25:44.966450] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:35.941 [2024-04-24 17:25:44.966488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:35.941 [2024-04-24 17:25:44.966504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:35.941 [2024-04-24 17:25:44.966511] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:35.941 [2024-04-24 17:25:44.966517] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d15c0 00:21:35.941 [2024-04-24 17:25:44.976968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:35.941 qpair failed and we were unable to recover it. 00:21:35.941 [2024-04-24 17:25:44.986550] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:35.941 [2024-04-24 17:25:44.986586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:35.941 [2024-04-24 17:25:44.986605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:35.941 [2024-04-24 17:25:44.986613] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:35.941 [2024-04-24 17:25:44.986619] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:21:35.941 [2024-04-24 17:25:44.996811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:35.941 qpair failed and we were unable to recover it. 
00:21:35.941 [2024-04-24 17:25:45.006485] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:35.941 [2024-04-24 17:25:45.006525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:35.941 [2024-04-24 17:25:45.006539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:35.941 [2024-04-24 17:25:45.006546] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:35.941 [2024-04-24 17:25:45.006552] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:21:35.941 [2024-04-24 17:25:45.016988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:35.941 qpair failed and we were unable to recover it. 00:21:35.941 [2024-04-24 17:25:45.017111] nvme_ctrlr.c:4340:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:21:35.941 A controller has encountered a failure and is being reset. 00:21:35.941 [2024-04-24 17:25:45.026772] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:35.941 [2024-04-24 17:25:45.026822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:35.941 [2024-04-24 17:25:45.026855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:35.941 [2024-04-24 17:25:45.026867] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:35.942 [2024-04-24 17:25:45.026877] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:21:35.942 [2024-04-24 17:25:45.037049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:35.942 qpair failed and we were unable to recover it. 00:21:35.942 [2024-04-24 17:25:45.046780] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:35.942 [2024-04-24 17:25:45.046818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:35.942 [2024-04-24 17:25:45.046843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:35.942 [2024-04-24 17:25:45.046850] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:35.942 [2024-04-24 17:25:45.046857] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:21:35.942 [2024-04-24 17:25:45.057132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:35.942 qpair failed and we were unable to recover it. 
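Once the admin queue can no longer carry Keep Alive ("Submitting Keep Alive failed" above), the host marks the controller failed and resets it, which is what the "Controller properly reset" / "Initializing NVMe Controllers" lines further down report. This test drives SPDK's userspace initiator, but the analogous knob on the kernel initiator is a connect-time option; a rough sketch, assuming a recent nvme-cli with the nvme-rdma module loaded:
  nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -k 5   # -k sets the keep-alive timeout in seconds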
00:21:35.942 [2024-04-24 17:25:45.057250] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:21:35.942 [2024-04-24 17:25:45.088978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:35.942 Controller properly reset. 00:21:35.942 Initializing NVMe Controllers 00:21:35.942 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:35.942 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:35.942 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:21:35.942 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:21:35.942 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:21:35.942 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:21:35.942 Initialization complete. Launching workers. 00:21:35.942 Starting thread on core 1 00:21:35.942 Starting thread on core 2 00:21:35.942 Starting thread on core 3 00:21:35.942 Starting thread on core 0 00:21:35.942 17:25:45 -- host/target_disconnect.sh@59 -- # sync 00:21:35.942 00:21:35.942 real 0m12.528s 00:21:35.942 user 0m27.907s 00:21:35.942 sys 0m2.249s 00:21:35.942 17:25:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:35.942 17:25:45 -- common/autotest_common.sh@10 -- # set +x 00:21:35.942 ************************************ 00:21:35.942 END TEST nvmf_target_disconnect_tc2 00:21:35.942 ************************************ 00:21:35.942 17:25:45 -- host/target_disconnect.sh@80 -- # '[' -n 192.168.100.9 ']' 00:21:35.942 17:25:45 -- host/target_disconnect.sh@81 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:21:35.942 17:25:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:35.942 17:25:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:35.942 17:25:45 -- common/autotest_common.sh@10 -- # set +x 00:21:36.197 ************************************ 00:21:36.197 START TEST nvmf_target_disconnect_tc3 00:21:36.197 ************************************ 00:21:36.197 17:25:45 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc3 00:21:36.197 17:25:45 -- host/target_disconnect.sh@65 -- # reconnectpid=3060819 00:21:36.197 17:25:45 -- host/target_disconnect.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:21:36.197 17:25:45 -- host/target_disconnect.sh@67 -- # sleep 2 00:21:36.197 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.088 17:25:47 -- host/target_disconnect.sh@68 -- # kill -9 3060676 00:21:38.088 17:25:47 -- host/target_disconnect.sh@70 -- # sleep 2 00:21:39.461 Read completed with error (sct=0, sc=8) 00:21:39.461 starting I/O failed 00:21:39.461 Read completed with error (sct=0, sc=8) 00:21:39.461 starting I/O failed 00:21:39.461 Write completed with error (sct=0, sc=8) 00:21:39.461 starting I/O failed 00:21:39.461 Read completed with error (sct=0, sc=8) 00:21:39.461 starting I/O failed 00:21:39.461 Read completed with error (sct=0, sc=8) 00:21:39.461 starting I/O failed 00:21:39.461 Write completed with error (sct=0, sc=8) 00:21:39.461 starting I/O failed 00:21:39.461 Write completed 
with error (sct=0, sc=8) 00:21:39.461 starting I/O failed 00:21:39.461 Read completed with error (sct=0, sc=8) 00:21:39.461 starting I/O failed 00:21:39.461 Read completed with error (sct=0, sc=8) 00:21:39.461 starting I/O failed 00:21:39.461 Read completed with error (sct=0, sc=8) 00:21:39.461 starting I/O failed 00:21:39.461 Read completed with error (sct=0, sc=8) 00:21:39.461 starting I/O failed 00:21:39.461 Write completed with error (sct=0, sc=8) 00:21:39.461 starting I/O failed 00:21:39.461 Write completed with error (sct=0, sc=8) 00:21:39.461 starting I/O failed 00:21:39.461 Write completed with error (sct=0, sc=8) 00:21:39.461 starting I/O failed 00:21:39.461 Read completed with error (sct=0, sc=8) 00:21:39.461 starting I/O failed 00:21:39.461 Write completed with error (sct=0, sc=8) 00:21:39.461 starting I/O failed 00:21:39.461 Read completed with error (sct=0, sc=8) 00:21:39.461 starting I/O failed 00:21:39.461 Write completed with error (sct=0, sc=8) 00:21:39.461 starting I/O failed 00:21:39.461 Read completed with error (sct=0, sc=8) 00:21:39.461 starting I/O failed 00:21:39.461 Write completed with error (sct=0, sc=8) 00:21:39.461 starting I/O failed 00:21:39.461 Read completed with error (sct=0, sc=8) 00:21:39.461 starting I/O failed 00:21:39.461 Read completed with error (sct=0, sc=8) 00:21:39.461 starting I/O failed 00:21:39.461 Write completed with error (sct=0, sc=8) 00:21:39.461 starting I/O failed 00:21:39.461 Write completed with error (sct=0, sc=8) 00:21:39.461 starting I/O failed 00:21:39.461 Write completed with error (sct=0, sc=8) 00:21:39.461 starting I/O failed 00:21:39.461 Read completed with error (sct=0, sc=8) 00:21:39.461 starting I/O failed 00:21:39.461 Write completed with error (sct=0, sc=8) 00:21:39.461 starting I/O failed 00:21:39.461 Read completed with error (sct=0, sc=8) 00:21:39.461 starting I/O failed 00:21:39.461 Read completed with error (sct=0, sc=8) 00:21:39.461 starting I/O failed 00:21:39.461 Read completed with error (sct=0, sc=8) 00:21:39.461 starting I/O failed 00:21:39.461 Write completed with error (sct=0, sc=8) 00:21:39.461 starting I/O failed 00:21:39.461 Read completed with error (sct=0, sc=8) 00:21:39.461 starting I/O failed 00:21:39.461 [2024-04-24 17:25:48.443674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:40.393 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 62: 3060676 Killed "${NVMF_APP[@]}" "$@" 00:21:40.393 17:25:49 -- host/target_disconnect.sh@71 -- # disconnect_init 192.168.100.9 00:21:40.393 17:25:49 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:21:40.393 17:25:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:40.393 17:25:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:40.393 17:25:49 -- common/autotest_common.sh@10 -- # set +x 00:21:40.393 17:25:49 -- nvmf/common.sh@470 -- # nvmfpid=3060876 00:21:40.393 17:25:49 -- nvmf/common.sh@471 -- # waitforlisten 3060876 00:21:40.393 17:25:49 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:21:40.393 17:25:49 -- common/autotest_common.sh@817 -- # '[' -z 3060876 ']' 00:21:40.393 17:25:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.393 17:25:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:40.393 17:25:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.393 17:25:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:40.393 17:25:49 -- common/autotest_common.sh@10 -- # set +x 00:21:40.393 [2024-04-24 17:25:49.340929] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:21:40.393 [2024-04-24 17:25:49.340975] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.393 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.393 [2024-04-24 17:25:49.410061] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:40.393 Read completed with error (sct=0, sc=8) 00:21:40.393 starting I/O failed 00:21:40.393 Write completed with error (sct=0, sc=8) 00:21:40.393 starting I/O failed 00:21:40.393 Write completed with error (sct=0, sc=8) 00:21:40.393 starting I/O failed 00:21:40.393 Write completed with error (sct=0, sc=8) 00:21:40.393 starting I/O failed 00:21:40.393 Write completed with error (sct=0, sc=8) 00:21:40.393 starting I/O failed 00:21:40.393 Write completed with error (sct=0, sc=8) 00:21:40.393 starting I/O failed 00:21:40.393 Read completed with error (sct=0, sc=8) 00:21:40.393 starting I/O failed 00:21:40.393 Write completed with error (sct=0, sc=8) 00:21:40.393 starting I/O failed 00:21:40.393 Write completed with error (sct=0, sc=8) 00:21:40.393 starting I/O failed 00:21:40.393 Write completed with error (sct=0, sc=8) 00:21:40.393 starting I/O failed 00:21:40.393 Read completed with error (sct=0, sc=8) 00:21:40.393 starting I/O failed 00:21:40.393 Write completed with error (sct=0, sc=8) 00:21:40.393 starting I/O failed 00:21:40.393 Read completed with error (sct=0, sc=8) 00:21:40.393 starting I/O failed 00:21:40.393 Read completed with error (sct=0, sc=8) 00:21:40.393 starting I/O failed 00:21:40.393 Read completed with error (sct=0, sc=8) 00:21:40.393 starting I/O failed 00:21:40.393 Read completed with error (sct=0, sc=8) 00:21:40.394 starting I/O failed 00:21:40.394 Read completed with error (sct=0, sc=8) 00:21:40.394 starting I/O failed 00:21:40.394 Read completed with error (sct=0, sc=8) 00:21:40.394 starting I/O failed 00:21:40.394 Read completed with error (sct=0, sc=8) 00:21:40.394 starting I/O failed 00:21:40.394 Read completed with error (sct=0, sc=8) 00:21:40.394 starting I/O failed 00:21:40.394 Write completed with error (sct=0, sc=8) 00:21:40.394 starting I/O failed 00:21:40.394 Write completed with error (sct=0, sc=8) 00:21:40.394 starting I/O failed 00:21:40.394 Write completed with error (sct=0, sc=8) 00:21:40.394 starting I/O failed 00:21:40.394 Read completed with error (sct=0, sc=8) 00:21:40.394 starting I/O failed 00:21:40.394 Read completed with error (sct=0, sc=8) 00:21:40.394 starting I/O failed 00:21:40.394 Write completed with error (sct=0, sc=8) 00:21:40.394 starting I/O failed 00:21:40.394 Read completed with error (sct=0, sc=8) 00:21:40.394 starting I/O failed 00:21:40.394 Write completed with error (sct=0, sc=8) 00:21:40.394 starting I/O failed 00:21:40.394 Read completed with error (sct=0, sc=8) 00:21:40.394 starting I/O failed 00:21:40.394 Write completed with error (sct=0, sc=8) 00:21:40.394 starting I/O failed 00:21:40.394 Read completed with error (sct=0, sc=8) 00:21:40.394 starting I/O failed 00:21:40.394 Read completed with error (sct=0, 
sc=8) 00:21:40.394 starting I/O failed 00:21:40.394 [2024-04-24 17:25:49.448616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:40.394 [2024-04-24 17:25:49.450206] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:40.394 [2024-04-24 17:25:49.450224] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:40.394 [2024-04-24 17:25:49.450231] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:21:40.394 [2024-04-24 17:25:49.478498] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.394 [2024-04-24 17:25:49.478531] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.394 [2024-04-24 17:25:49.478538] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.394 [2024-04-24 17:25:49.478544] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:40.394 [2024-04-24 17:25:49.478549] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:40.394 [2024-04-24 17:25:49.478659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:40.394 [2024-04-24 17:25:49.478769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:40.394 [2024-04-24 17:25:49.478875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:40.394 [2024-04-24 17:25:49.478876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:21:40.967 17:25:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:40.967 17:25:50 -- common/autotest_common.sh@850 -- # return 0 00:21:40.967 17:25:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:40.967 17:25:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:40.967 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:21:40.967 17:25:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.967 17:25:50 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:40.967 17:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.967 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:21:40.967 Malloc0 00:21:40.967 17:25:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.967 17:25:50 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:21:40.967 17:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.967 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:21:41.224 [2024-04-24 17:25:50.226435] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22ae000/0x22b9c40) succeed. 00:21:41.224 [2024-04-24 17:25:50.236817] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x22af5f0/0x2359cd0) succeed. 
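nvmf_create_transport -t rdma walks every RDMA-capable device visible to the target, hence the two create_ib_device notices for mlx5_0 and mlx5_1 (the two mlx5 ports this job runs against). The same view is available outside SPDK with standard RDMA tooling; a quick check, assuming rdma-core and iproute2 are installed:
  ibv_devinfo -l      # lists the verbs devices (mlx5_0, mlx5_1)
  rdma link show      # shows port state and the netdev each port is bound to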
00:21:41.224 17:25:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.224 17:25:50 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:41.224 17:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.224 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:21:41.224 17:25:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.225 17:25:50 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:41.225 17:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.225 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:21:41.225 17:25:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.225 17:25:50 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:21:41.225 17:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.225 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:21:41.225 [2024-04-24 17:25:50.378884] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:21:41.225 17:25:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.225 17:25:50 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:21:41.225 17:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.225 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:21:41.225 17:25:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.225 17:25:50 -- host/target_disconnect.sh@73 -- # wait 3060819 00:21:41.225 [2024-04-24 17:25:50.454252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:41.225 qpair failed and we were unable to recover it. 00:21:41.225 [2024-04-24 17:25:50.455761] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:41.225 [2024-04-24 17:25:50.455779] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:41.225 [2024-04-24 17:25:50.455785] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:21:42.592 [2024-04-24 17:25:51.459895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:42.592 qpair failed and we were unable to recover it. 00:21:42.592 [2024-04-24 17:25:51.461424] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:42.592 [2024-04-24 17:25:51.461438] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:42.592 [2024-04-24 17:25:51.461444] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:21:43.522 [2024-04-24 17:25:52.465309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:43.522 qpair failed and we were unable to recover it. 
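rpc_cmd above is just the autotest wrapper around scripts/rpc.py, so the alternate-path target the test builds here can be reproduced by hand against a running nvmf_tgt. A minimal sketch, assuming the standard scripts/rpc.py entry point and the default /var/tmp/spdk.sock RPC socket, using the same names and arguments the test passes:
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420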
00:21:43.522 [2024-04-24 17:25:52.466753] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:43.522 [2024-04-24 17:25:52.466768] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:43.522 [2024-04-24 17:25:52.466774] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:21:44.450 [2024-04-24 17:25:53.470559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:44.450 qpair failed and we were unable to recover it. 00:21:44.450 [2024-04-24 17:25:53.471989] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:44.450 [2024-04-24 17:25:53.472004] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:44.450 [2024-04-24 17:25:53.472010] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:21:45.380 [2024-04-24 17:25:54.475817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:45.380 qpair failed and we were unable to recover it. 00:21:45.380 [2024-04-24 17:25:54.477216] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:45.380 [2024-04-24 17:25:54.477231] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:45.380 [2024-04-24 17:25:54.477237] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:21:46.310 [2024-04-24 17:25:55.481013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:46.310 qpair failed and we were unable to recover it. 00:21:46.310 [2024-04-24 17:25:55.482541] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:46.310 [2024-04-24 17:25:55.482556] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:46.310 [2024-04-24 17:25:55.482562] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:21:47.265 [2024-04-24 17:25:56.486263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:47.265 qpair failed and we were unable to recover it. 
00:21:48.633 Read completed with error (sct=0, sc=8) 00:21:48.633 starting I/O failed 00:21:48.633 Read completed with error (sct=0, sc=8) 00:21:48.633 starting I/O failed 00:21:48.633 Read completed with error (sct=0, sc=8) 00:21:48.633 starting I/O failed 00:21:48.633 Write completed with error (sct=0, sc=8) 00:21:48.633 starting I/O failed 00:21:48.633 Read completed with error (sct=0, sc=8) 00:21:48.633 starting I/O failed 00:21:48.633 Read completed with error (sct=0, sc=8) 00:21:48.633 starting I/O failed 00:21:48.633 Write completed with error (sct=0, sc=8) 00:21:48.633 starting I/O failed 00:21:48.633 Write completed with error (sct=0, sc=8) 00:21:48.633 starting I/O failed 00:21:48.633 Write completed with error (sct=0, sc=8) 00:21:48.633 starting I/O failed 00:21:48.633 Write completed with error (sct=0, sc=8) 00:21:48.633 starting I/O failed 00:21:48.633 Read completed with error (sct=0, sc=8) 00:21:48.633 starting I/O failed 00:21:48.633 Read completed with error (sct=0, sc=8) 00:21:48.633 starting I/O failed 00:21:48.633 Write completed with error (sct=0, sc=8) 00:21:48.633 starting I/O failed 00:21:48.633 Write completed with error (sct=0, sc=8) 00:21:48.633 starting I/O failed 00:21:48.633 Read completed with error (sct=0, sc=8) 00:21:48.633 starting I/O failed 00:21:48.633 Write completed with error (sct=0, sc=8) 00:21:48.633 starting I/O failed 00:21:48.633 Write completed with error (sct=0, sc=8) 00:21:48.633 starting I/O failed 00:21:48.633 Write completed with error (sct=0, sc=8) 00:21:48.633 starting I/O failed 00:21:48.633 Read completed with error (sct=0, sc=8) 00:21:48.633 starting I/O failed 00:21:48.633 Write completed with error (sct=0, sc=8) 00:21:48.633 starting I/O failed 00:21:48.633 Read completed with error (sct=0, sc=8) 00:21:48.633 starting I/O failed 00:21:48.633 Write completed with error (sct=0, sc=8) 00:21:48.633 starting I/O failed 00:21:48.633 Read completed with error (sct=0, sc=8) 00:21:48.633 starting I/O failed 00:21:48.633 Write completed with error (sct=0, sc=8) 00:21:48.633 starting I/O failed 00:21:48.633 Read completed with error (sct=0, sc=8) 00:21:48.633 starting I/O failed 00:21:48.633 Write completed with error (sct=0, sc=8) 00:21:48.633 starting I/O failed 00:21:48.633 Write completed with error (sct=0, sc=8) 00:21:48.633 starting I/O failed 00:21:48.633 Read completed with error (sct=0, sc=8) 00:21:48.633 starting I/O failed 00:21:48.633 Read completed with error (sct=0, sc=8) 00:21:48.633 starting I/O failed 00:21:48.633 Write completed with error (sct=0, sc=8) 00:21:48.633 starting I/O failed 00:21:48.633 Read completed with error (sct=0, sc=8) 00:21:48.633 starting I/O failed 00:21:48.633 Write completed with error (sct=0, sc=8) 00:21:48.633 starting I/O failed 00:21:48.633 [2024-04-24 17:25:57.491300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:49.560 Read completed with error (sct=0, sc=8) 00:21:49.560 starting I/O failed 00:21:49.560 Read completed with error (sct=0, sc=8) 00:21:49.560 starting I/O failed 00:21:49.560 Write completed with error (sct=0, sc=8) 00:21:49.560 starting I/O failed 00:21:49.560 Write completed with error (sct=0, sc=8) 00:21:49.560 starting I/O failed 00:21:49.561 Read completed with error (sct=0, sc=8) 00:21:49.561 starting I/O failed 00:21:49.561 Read completed with error (sct=0, sc=8) 00:21:49.561 starting I/O failed 00:21:49.561 Write completed with error (sct=0, sc=8) 00:21:49.561 starting I/O failed 
00:21:49.561 Write completed with error (sct=0, sc=8) 00:21:49.561 starting I/O failed 00:21:49.561 Write completed with error (sct=0, sc=8) 00:21:49.561 starting I/O failed 00:21:49.561 Write completed with error (sct=0, sc=8) 00:21:49.561 starting I/O failed 00:21:49.561 Read completed with error (sct=0, sc=8) 00:21:49.561 starting I/O failed 00:21:49.561 Read completed with error (sct=0, sc=8) 00:21:49.561 starting I/O failed 00:21:49.561 Write completed with error (sct=0, sc=8) 00:21:49.561 starting I/O failed 00:21:49.561 Write completed with error (sct=0, sc=8) 00:21:49.561 starting I/O failed 00:21:49.561 Read completed with error (sct=0, sc=8) 00:21:49.561 starting I/O failed 00:21:49.561 Read completed with error (sct=0, sc=8) 00:21:49.561 starting I/O failed 00:21:49.561 Write completed with error (sct=0, sc=8) 00:21:49.561 starting I/O failed 00:21:49.561 Read completed with error (sct=0, sc=8) 00:21:49.561 starting I/O failed 00:21:49.561 Write completed with error (sct=0, sc=8) 00:21:49.561 starting I/O failed 00:21:49.561 Write completed with error (sct=0, sc=8) 00:21:49.561 starting I/O failed 00:21:49.561 Read completed with error (sct=0, sc=8) 00:21:49.561 starting I/O failed 00:21:49.561 Read completed with error (sct=0, sc=8) 00:21:49.561 starting I/O failed 00:21:49.561 Read completed with error (sct=0, sc=8) 00:21:49.561 starting I/O failed 00:21:49.561 Write completed with error (sct=0, sc=8) 00:21:49.561 starting I/O failed 00:21:49.561 Write completed with error (sct=0, sc=8) 00:21:49.561 starting I/O failed 00:21:49.561 Write completed with error (sct=0, sc=8) 00:21:49.561 starting I/O failed 00:21:49.561 Read completed with error (sct=0, sc=8) 00:21:49.561 starting I/O failed 00:21:49.561 Read completed with error (sct=0, sc=8) 00:21:49.561 starting I/O failed 00:21:49.561 Write completed with error (sct=0, sc=8) 00:21:49.561 starting I/O failed 00:21:49.561 Read completed with error (sct=0, sc=8) 00:21:49.561 starting I/O failed 00:21:49.561 Read completed with error (sct=0, sc=8) 00:21:49.561 starting I/O failed 00:21:49.561 Read completed with error (sct=0, sc=8) 00:21:49.561 starting I/O failed 00:21:49.561 [2024-04-24 17:25:58.496179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:49.561 [2024-04-24 17:25:58.497750] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:49.561 [2024-04-24 17:25:58.497766] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:49.561 [2024-04-24 17:25:58.497772] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d03c0 00:21:50.489 [2024-04-24 17:25:59.501543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:50.489 qpair failed and we were unable to recover it. 
00:21:50.489 [2024-04-24 17:25:59.503003] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:50.489 [2024-04-24 17:25:59.503019] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:50.489 [2024-04-24 17:25:59.503025] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d03c0 00:21:51.417 [2024-04-24 17:26:00.506920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:51.417 qpair failed and we were unable to recover it. 00:21:51.417 [2024-04-24 17:26:00.508495] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:51.417 [2024-04-24 17:26:00.508516] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:51.417 [2024-04-24 17:26:00.508522] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:52.345 [2024-04-24 17:26:01.512448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:52.345 qpair failed and we were unable to recover it. 00:21:52.345 [2024-04-24 17:26:01.514140] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:52.345 [2024-04-24 17:26:01.514155] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:52.345 [2024-04-24 17:26:01.514162] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:21:53.272 [2024-04-24 17:26:02.518088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:53.272 qpair failed and we were unable to recover it. 00:21:53.272 [2024-04-24 17:26:02.518218] nvme_ctrlr.c:4340:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:21:53.272 A controller has encountered a failure and is being reset. 00:21:53.272 Resorting to new failover address 192.168.100.9 00:21:53.272 [2024-04-24 17:26:02.519884] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:53.272 [2024-04-24 17:26:02.519913] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:53.272 [2024-04-24 17:26:02.519928] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:21:54.639 [2024-04-24 17:26:03.523933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:54.639 qpair failed and we were unable to recover it. 
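With the primary path gone, the host falls back to the alternate address handed to the reconnect example through its -r string (alt_traddr:192.168.100.9, the listener added above); the "Resorting to new failover address 192.168.100.9" line marks that switch. To confirm from the target side that cnode1 is really exported on the failover portal while the host retries, the running nvmf_tgt can be queried over RPC; a small check, assuming it still answers on the default /var/tmp/spdk.sock:
  ./scripts/rpc.py nvmf_get_subsystems    # listen_addresses should show trtype RDMA, traddr 192.168.100.9, trsvcid 4420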
00:21:54.639 [2024-04-24 17:26:03.525417] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:54.639 [2024-04-24 17:26:03.525432] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:54.639 [2024-04-24 17:26:03.525438] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:21:55.574 [2024-04-24 17:26:04.529258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:55.574 qpair failed and we were unable to recover it. 00:21:55.574 [2024-04-24 17:26:04.529353] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:55.574 [2024-04-24 17:26:04.529446] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:21:55.574 [2024-04-24 17:26:04.559407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:55.574 Controller properly reset. 00:21:55.574 Initializing NVMe Controllers 00:21:55.574 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:55.574 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:55.574 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:21:55.574 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:21:55.574 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:21:55.574 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:21:55.574 Initialization complete. Launching workers. 
00:21:55.574 Starting thread on core 1 00:21:55.574 Starting thread on core 2 00:21:55.574 Starting thread on core 3 00:21:55.574 Starting thread on core 0 00:21:55.574 17:26:04 -- host/target_disconnect.sh@74 -- # sync 00:21:55.574 00:21:55.574 real 0m19.326s 00:21:55.574 user 1m5.926s 00:21:55.574 sys 0m4.341s 00:21:55.574 17:26:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:55.574 17:26:04 -- common/autotest_common.sh@10 -- # set +x 00:21:55.574 ************************************ 00:21:55.574 END TEST nvmf_target_disconnect_tc3 00:21:55.574 ************************************ 00:21:55.574 17:26:04 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:21:55.574 17:26:04 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:21:55.574 17:26:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:55.574 17:26:04 -- nvmf/common.sh@117 -- # sync 00:21:55.574 17:26:04 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:55.574 17:26:04 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:55.574 17:26:04 -- nvmf/common.sh@120 -- # set +e 00:21:55.574 17:26:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:55.574 17:26:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:55.574 rmmod nvme_rdma 00:21:55.574 rmmod nvme_fabrics 00:21:55.574 17:26:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:55.574 17:26:04 -- nvmf/common.sh@124 -- # set -e 00:21:55.574 17:26:04 -- nvmf/common.sh@125 -- # return 0 00:21:55.574 17:26:04 -- nvmf/common.sh@478 -- # '[' -n 3060876 ']' 00:21:55.574 17:26:04 -- nvmf/common.sh@479 -- # killprocess 3060876 00:21:55.574 17:26:04 -- common/autotest_common.sh@936 -- # '[' -z 3060876 ']' 00:21:55.574 17:26:04 -- common/autotest_common.sh@940 -- # kill -0 3060876 00:21:55.574 17:26:04 -- common/autotest_common.sh@941 -- # uname 00:21:55.574 17:26:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:55.574 17:26:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3060876 00:21:55.574 17:26:04 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:21:55.574 17:26:04 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:21:55.574 17:26:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3060876' 00:21:55.574 killing process with pid 3060876 00:21:55.574 17:26:04 -- common/autotest_common.sh@955 -- # kill 3060876 00:21:55.574 17:26:04 -- common/autotest_common.sh@960 -- # wait 3060876 00:21:55.874 17:26:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:55.874 17:26:05 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:21:55.874 00:21:55.874 real 0m39.205s 00:21:55.874 user 2m38.164s 00:21:55.874 sys 0m11.242s 00:21:55.874 17:26:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:55.874 17:26:05 -- common/autotest_common.sh@10 -- # set +x 00:21:55.874 ************************************ 00:21:55.874 END TEST nvmf_target_disconnect 00:21:55.874 ************************************ 00:21:55.874 17:26:05 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:21:55.874 17:26:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:55.874 17:26:05 -- common/autotest_common.sh@10 -- # set +x 00:21:55.874 17:26:05 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:21:55.874 00:21:55.874 real 15m25.196s 00:21:55.874 user 41m38.800s 00:21:55.874 sys 4m49.084s 00:21:55.874 17:26:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:55.874 17:26:05 -- common/autotest_common.sh@10 -- # set +x 00:21:55.874 ************************************ 
00:21:55.874 END TEST nvmf_rdma 00:21:55.874 ************************************ 00:21:56.132 17:26:05 -- spdk/autotest.sh@283 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:21:56.132 17:26:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:56.132 17:26:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:56.132 17:26:05 -- common/autotest_common.sh@10 -- # set +x 00:21:56.132 ************************************ 00:21:56.132 START TEST spdkcli_nvmf_rdma 00:21:56.132 ************************************ 00:21:56.132 17:26:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:21:56.132 * Looking for test storage... 00:21:56.390 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:21:56.390 17:26:05 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:21:56.390 17:26:05 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:21:56.390 17:26:05 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:21:56.390 17:26:05 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:56.390 17:26:05 -- nvmf/common.sh@7 -- # uname -s 00:21:56.390 17:26:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:56.390 17:26:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:56.390 17:26:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:56.390 17:26:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:56.390 17:26:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:56.390 17:26:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:56.390 17:26:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:56.390 17:26:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:56.390 17:26:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:56.390 17:26:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:56.390 17:26:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:21:56.390 17:26:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:21:56.390 17:26:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:56.390 17:26:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:56.390 17:26:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:56.390 17:26:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:56.390 17:26:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:56.390 17:26:05 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:56.390 17:26:05 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:56.390 17:26:05 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:56.390 17:26:05 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.390 17:26:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.390 17:26:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.390 17:26:05 -- paths/export.sh@5 -- # export PATH 00:21:56.390 17:26:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.390 17:26:05 -- nvmf/common.sh@47 -- # : 0 00:21:56.390 17:26:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:56.390 17:26:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:56.390 17:26:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:56.390 17:26:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:56.390 17:26:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:56.390 17:26:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:56.390 17:26:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:56.390 17:26:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:56.390 17:26:05 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:21:56.390 17:26:05 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:21:56.390 17:26:05 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:21:56.390 17:26:05 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:21:56.390 17:26:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:56.391 17:26:05 -- common/autotest_common.sh@10 -- # set +x 00:21:56.391 17:26:05 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:21:56.391 17:26:05 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3061208 00:21:56.391 17:26:05 -- spdkcli/common.sh@34 -- # waitforlisten 3061208 00:21:56.391 17:26:05 -- common/autotest_common.sh@817 -- # '[' -z 3061208 ']' 00:21:56.391 17:26:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.391 17:26:05 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:21:56.391 17:26:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:56.391 17:26:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:21:56.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.391 17:26:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:56.391 17:26:05 -- common/autotest_common.sh@10 -- # set +x 00:21:56.391 [2024-04-24 17:26:05.466021] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:21:56.391 [2024-04-24 17:26:05.466065] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3061208 ] 00:21:56.391 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.391 [2024-04-24 17:26:05.518768] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:56.391 [2024-04-24 17:26:05.588436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:56.391 [2024-04-24 17:26:05.588438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.324 17:26:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:57.324 17:26:06 -- common/autotest_common.sh@850 -- # return 0 00:21:57.324 17:26:06 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:21:57.324 17:26:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:57.324 17:26:06 -- common/autotest_common.sh@10 -- # set +x 00:21:57.324 17:26:06 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:21:57.324 17:26:06 -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:21:57.324 17:26:06 -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:21:57.324 17:26:06 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:21:57.324 17:26:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.324 17:26:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:57.324 17:26:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:57.324 17:26:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:57.324 17:26:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.324 17:26:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:57.324 17:26:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.324 17:26:06 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:57.324 17:26:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:57.324 17:26:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:57.324 17:26:06 -- common/autotest_common.sh@10 -- # set +x 00:22:02.589 17:26:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:02.589 17:26:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:02.589 17:26:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:02.589 17:26:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:02.589 17:26:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:02.589 17:26:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:02.589 17:26:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:02.589 17:26:11 -- nvmf/common.sh@295 -- # net_devs=() 00:22:02.589 17:26:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:02.589 17:26:11 -- nvmf/common.sh@296 -- # e810=() 00:22:02.589 17:26:11 -- nvmf/common.sh@296 -- # local -ga e810 00:22:02.589 17:26:11 -- nvmf/common.sh@297 -- # x722=() 00:22:02.589 17:26:11 -- nvmf/common.sh@297 -- # local -ga x722 00:22:02.589 17:26:11 -- nvmf/common.sh@298 -- # mlx=() 00:22:02.589 17:26:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:02.589 17:26:11 -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.589 17:26:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.589 17:26:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.589 17:26:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.589 17:26:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.589 17:26:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.589 17:26:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.589 17:26:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.589 17:26:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.589 17:26:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.589 17:26:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.589 17:26:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:02.589 17:26:11 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:02.589 17:26:11 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:02.589 17:26:11 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:02.589 17:26:11 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:02.589 17:26:11 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:02.589 17:26:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:02.589 17:26:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.589 17:26:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:22:02.589 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:22:02.589 17:26:11 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:02.589 17:26:11 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:02.589 17:26:11 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:02.589 17:26:11 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:02.589 17:26:11 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:02.589 17:26:11 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:02.589 17:26:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.589 17:26:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:22:02.589 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:22:02.589 17:26:11 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:02.589 17:26:11 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:02.589 17:26:11 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:02.589 17:26:11 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:02.589 17:26:11 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:02.589 17:26:11 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:02.589 17:26:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:02.589 17:26:11 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:02.589 17:26:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:02.589 17:26:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.589 17:26:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:02.589 17:26:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.589 17:26:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:22:02.589 Found net devices under 0000:da:00.0: mlx_0_0 00:22:02.589 17:26:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.589 17:26:11 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:02.589 17:26:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.589 17:26:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:02.589 17:26:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.589 17:26:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:22:02.589 Found net devices under 0000:da:00.1: mlx_0_1 00:22:02.589 17:26:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.589 17:26:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:02.589 17:26:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:02.589 17:26:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:02.589 17:26:11 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:22:02.589 17:26:11 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:22:02.589 17:26:11 -- nvmf/common.sh@409 -- # rdma_device_init 00:22:02.589 17:26:11 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:22:02.589 17:26:11 -- nvmf/common.sh@58 -- # uname 00:22:02.589 17:26:11 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:02.589 17:26:11 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:02.589 17:26:11 -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:02.589 17:26:11 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:02.589 17:26:11 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:02.589 17:26:11 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:02.589 17:26:11 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:02.589 17:26:11 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:02.589 17:26:11 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:22:02.589 17:26:11 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:02.589 17:26:11 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:02.589 17:26:11 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:02.589 17:26:11 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:02.589 17:26:11 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:02.589 17:26:11 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:02.589 17:26:11 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:02.589 17:26:11 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:02.589 17:26:11 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:02.589 17:26:11 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:02.589 17:26:11 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:02.589 17:26:11 -- nvmf/common.sh@105 -- # continue 2 00:22:02.589 17:26:11 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:02.589 17:26:11 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:02.589 17:26:11 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:02.589 17:26:11 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:02.589 17:26:11 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:02.589 17:26:11 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:02.589 17:26:11 -- nvmf/common.sh@105 -- # continue 2 00:22:02.589 17:26:11 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:02.589 17:26:11 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:02.589 17:26:11 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:02.589 17:26:11 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:02.589 17:26:11 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:02.589 17:26:11 -- nvmf/common.sh@113 -- # cut -d/ -f1 
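For reference, the per-interface address lookup performed by get_ip_address above can be reproduced standalone with the same pipeline. This is a minimal sketch using the exact commands from this run; the interface name mlx_0_0 is just the device enumerated here and would differ on other hosts.

    #!/usr/bin/env bash
    # Sketch of the get_ip_address helper used above:
    # print the first IPv4 address assigned to a given RDMA netdev.
    iface=mlx_0_0                                   # interface from this run; adjust as needed
    ip -o -4 addr show "$iface" | awk '{print $4}' | cut -d/ -f1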
00:22:02.589 17:26:11 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:02.589 17:26:11 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:02.589 17:26:11 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:02.589 434: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:02.589 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:22:02.590 altname enp218s0f0np0 00:22:02.590 altname ens818f0np0 00:22:02.590 inet 192.168.100.8/24 scope global mlx_0_0 00:22:02.590 valid_lft forever preferred_lft forever 00:22:02.590 17:26:11 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:02.590 17:26:11 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:02.590 17:26:11 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:02.590 17:26:11 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:02.590 17:26:11 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:02.590 17:26:11 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:02.590 17:26:11 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:02.590 17:26:11 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:02.590 17:26:11 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:02.590 435: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:02.590 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:22:02.590 altname enp218s0f1np1 00:22:02.590 altname ens818f1np1 00:22:02.590 inet 192.168.100.9/24 scope global mlx_0_1 00:22:02.590 valid_lft forever preferred_lft forever 00:22:02.590 17:26:11 -- nvmf/common.sh@411 -- # return 0 00:22:02.590 17:26:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:02.590 17:26:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:02.590 17:26:11 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:22:02.590 17:26:11 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:22:02.590 17:26:11 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:02.590 17:26:11 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:02.590 17:26:11 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:02.590 17:26:11 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:02.590 17:26:11 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:02.590 17:26:11 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:02.590 17:26:11 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:02.590 17:26:11 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:02.590 17:26:11 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:02.590 17:26:11 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:02.590 17:26:11 -- nvmf/common.sh@105 -- # continue 2 00:22:02.590 17:26:11 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:02.590 17:26:11 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:02.590 17:26:11 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:02.590 17:26:11 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:02.590 17:26:11 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:02.590 17:26:11 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:02.590 17:26:11 -- nvmf/common.sh@105 -- # continue 2 00:22:02.590 17:26:11 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:02.590 17:26:11 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:02.590 17:26:11 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:02.590 17:26:11 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:02.590 17:26:11 -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:22:02.590 17:26:11 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:02.590 17:26:11 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:02.590 17:26:11 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:02.590 17:26:11 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:02.590 17:26:11 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:02.590 17:26:11 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:02.590 17:26:11 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:02.590 17:26:11 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:22:02.590 192.168.100.9' 00:22:02.590 17:26:11 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:22:02.590 192.168.100.9' 00:22:02.590 17:26:11 -- nvmf/common.sh@446 -- # head -n 1 00:22:02.590 17:26:11 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:02.590 17:26:11 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:22:02.590 192.168.100.9' 00:22:02.590 17:26:11 -- nvmf/common.sh@447 -- # tail -n +2 00:22:02.590 17:26:11 -- nvmf/common.sh@447 -- # head -n 1 00:22:02.590 17:26:11 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:02.590 17:26:11 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:22:02.590 17:26:11 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:02.590 17:26:11 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:22:02.590 17:26:11 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:22:02.590 17:26:11 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:22:02.590 17:26:11 -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:22:02.590 17:26:11 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:22:02.590 17:26:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:02.590 17:26:11 -- common/autotest_common.sh@10 -- # set +x 00:22:02.590 17:26:11 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:22:02.590 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:22:02.590 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:22:02.590 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:22:02.590 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:22:02.590 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:22:02.590 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:22:02.590 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:22:02.590 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:22:02.590 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:22:02.590 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:22:02.590 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:02.590 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:22:02.590 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:22:02.590 '\''/nvmf/subsystem 
create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:02.590 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:22:02.590 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:22:02.590 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:22:02.590 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:22:02.590 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:02.590 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:22:02.590 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:22:02.590 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:22:02.590 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:22:02.590 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:02.590 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:22:02.590 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:22:02.590 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:22:02.590 ' 00:22:02.849 [2024-04-24 17:26:12.086978] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:22:05.375 [2024-04-24 17:26:14.143375] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10a1750/0xf2b700) succeed. 00:22:05.375 [2024-04-24 17:26:14.155324] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x10a1c40/0x10167c0) succeed. 
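The configuration batch driven by spdkcli_job.py above can also be issued one command at a time through scripts/spdkcli.py against the running nvmf_tgt. The sketch below replays a subset of the exact command strings from this run (bdev name, serial number, NQN, and listen address are the ones generated here); it assumes the target is already up and listening on /var/tmp/spdk.sock, as it is at this point in the log.

    # Sketch: replay part of the spdkcli NVMe-oF/RDMA configuration manually
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK"/scripts/spdkcli.py '/bdevs/malloc create 32 512 Malloc3'
    "$SPDK"/scripts/spdkcli.py 'nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'
    "$SPDK"/scripts/spdkcli.py '/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'
    "$SPDK"/scripts/spdkcli.py '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'
    "$SPDK"/scripts/spdkcli.py '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'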
00:22:06.306 [2024-04-24 17:26:15.385256] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:22:08.833 [2024-04-24 17:26:17.547982] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:22:10.204 [2024-04-24 17:26:19.405935] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:22:12.103 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:22:12.103 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:22:12.103 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:22:12.103 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:22:12.103 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:22:12.103 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:22:12.103 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:22:12.103 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:22:12.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:22:12.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:22:12.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:22:12.103 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:12.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:22:12.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:22:12.103 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:12.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:22:12.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:22:12.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:22:12.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:22:12.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:12.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:22:12.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:22:12.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:22:12.103 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:22:12.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:12.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:22:12.103 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:22:12.103 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:22:12.103 17:26:20 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:22:12.103 17:26:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:12.103 17:26:20 -- common/autotest_common.sh@10 -- # set +x 00:22:12.103 17:26:20 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:22:12.103 17:26:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:12.103 17:26:20 -- common/autotest_common.sh@10 -- # set +x 00:22:12.103 17:26:20 -- spdkcli/nvmf.sh@69 -- # check_match 00:22:12.103 17:26:20 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:22:12.103 17:26:21 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:22:12.361 17:26:21 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:22:12.361 17:26:21 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:22:12.361 17:26:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:12.361 17:26:21 -- common/autotest_common.sh@10 -- # set +x 00:22:12.361 17:26:21 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:22:12.361 17:26:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:12.361 17:26:21 -- common/autotest_common.sh@10 -- # set +x 00:22:12.361 17:26:21 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:22:12.361 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:22:12.361 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:22:12.361 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:22:12.361 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:22:12.361 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:22:12.361 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:22:12.361 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:22:12.361 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:22:12.361 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:22:12.361 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:22:12.361 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:22:12.361 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:22:12.361 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:22:12.361 ' 00:22:17.623 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:22:17.623 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:22:17.623 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:22:17.623 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:22:17.623 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:22:17.623 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:22:17.623 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:22:17.623 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:22:17.623 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:22:17.623 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:22:17.623 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:22:17.623 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:22:17.623 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:22:17.623 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:22:17.623 17:26:26 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:22:17.623 17:26:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:17.623 17:26:26 -- common/autotest_common.sh@10 -- # set +x 00:22:17.623 17:26:26 -- spdkcli/nvmf.sh@90 -- # killprocess 3061208 00:22:17.623 17:26:26 -- common/autotest_common.sh@936 -- # '[' -z 3061208 ']' 00:22:17.623 17:26:26 -- common/autotest_common.sh@940 -- # kill -0 3061208 00:22:17.623 17:26:26 -- common/autotest_common.sh@941 -- # uname 00:22:17.623 17:26:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:17.623 17:26:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3061208 00:22:17.623 17:26:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:17.623 17:26:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:17.623 17:26:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3061208' 00:22:17.623 killing process with pid 3061208 00:22:17.623 17:26:26 -- common/autotest_common.sh@955 -- # kill 3061208 00:22:17.623 [2024-04-24 17:26:26.432998] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:22:17.623 17:26:26 -- common/autotest_common.sh@960 -- # wait 3061208 00:22:17.623 17:26:26 -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:22:17.623 17:26:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:17.623 17:26:26 -- nvmf/common.sh@117 -- # sync 00:22:17.623 17:26:26 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:17.623 17:26:26 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:17.623 17:26:26 -- nvmf/common.sh@120 -- # set +e 00:22:17.623 17:26:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:17.623 17:26:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:17.623 rmmod nvme_rdma 00:22:17.623 rmmod nvme_fabrics 00:22:17.623 17:26:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
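The teardown that killprocess and nvmftestfini perform above amounts to stopping the target process and unloading the NVMe fabrics host modules. A minimal sketch, assuming the target pid has been captured in a variable (in this run it was 3061208):

    # Sketch of the cleanup sequence shown above
    kill "$nvmf_tgt_pid"          # stop the nvmf_tgt started for this test
    modprobe -v -r nvme-rdma      # unload initiator modules, as nvmfcleanup does
    modprobe -v -r nvme-fabrics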
00:22:17.623 17:26:26 -- nvmf/common.sh@124 -- # set -e 00:22:17.623 17:26:26 -- nvmf/common.sh@125 -- # return 0 00:22:17.623 17:26:26 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:22:17.623 17:26:26 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:17.623 17:26:26 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:22:17.623 00:22:17.623 real 0m21.442s 00:22:17.623 user 0m45.267s 00:22:17.623 sys 0m4.890s 00:22:17.623 17:26:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:17.623 17:26:26 -- common/autotest_common.sh@10 -- # set +x 00:22:17.623 ************************************ 00:22:17.623 END TEST spdkcli_nvmf_rdma 00:22:17.623 ************************************ 00:22:17.623 17:26:26 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:22:17.623 17:26:26 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:22:17.623 17:26:26 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:22:17.623 17:26:26 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:22:17.623 17:26:26 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:22:17.623 17:26:26 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:22:17.623 17:26:26 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:22:17.623 17:26:26 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:22:17.623 17:26:26 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:22:17.623 17:26:26 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:22:17.623 17:26:26 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:22:17.623 17:26:26 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:22:17.623 17:26:26 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:22:17.623 17:26:26 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:22:17.623 17:26:26 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:22:17.623 17:26:26 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:22:17.623 17:26:26 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:22:17.623 17:26:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:17.623 17:26:26 -- common/autotest_common.sh@10 -- # set +x 00:22:17.623 17:26:26 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:22:17.623 17:26:26 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:22:17.623 17:26:26 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:22:17.623 17:26:26 -- common/autotest_common.sh@10 -- # set +x 00:22:21.800 INFO: APP EXITING 00:22:21.800 INFO: killing all VMs 00:22:21.800 INFO: killing vhost app 00:22:21.800 INFO: EXIT DONE 00:22:24.333 Waiting for block devices as requested 00:22:24.333 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:22:24.333 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:22:24.333 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:22:24.593 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:22:24.593 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:22:24.593 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:22:24.593 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:22:24.852 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:22:24.852 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:22:24.852 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:22:25.111 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:22:25.111 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:22:25.111 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:22:25.111 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:22:25.371 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:22:25.371 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:22:25.371 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:22:27.902 Cleaning 00:22:27.902 Removing: /var/run/dpdk/spdk0/config 00:22:27.902 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:22:27.902 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:22:27.902 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:22:27.902 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:22:27.902 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:22:27.902 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:22:27.902 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:22:27.902 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:22:27.902 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:22:27.902 Removing: /var/run/dpdk/spdk0/hugepage_info 00:22:27.902 Removing: /var/run/dpdk/spdk1/config 00:22:27.902 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:22:27.902 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:22:27.902 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:22:27.902 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:22:27.902 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:22:27.902 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:22:27.902 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:22:27.902 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:22:27.902 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:22:27.902 Removing: /var/run/dpdk/spdk1/hugepage_info 00:22:27.902 Removing: /var/run/dpdk/spdk1/mp_socket 00:22:27.902 Removing: /var/run/dpdk/spdk2/config 00:22:27.902 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:22:27.902 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:22:27.902 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:22:27.902 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:22:27.902 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:22:27.902 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:22:27.902 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:22:27.902 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:22:27.902 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:22:27.902 Removing: /var/run/dpdk/spdk2/hugepage_info 00:22:27.902 Removing: /var/run/dpdk/spdk3/config 00:22:27.902 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:22:27.902 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:22:27.902 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:22:27.902 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:22:27.902 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:22:27.902 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:22:27.902 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:22:27.902 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:22:27.902 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:22:27.902 Removing: /var/run/dpdk/spdk3/hugepage_info 00:22:27.902 Removing: /var/run/dpdk/spdk4/config 00:22:27.902 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:22:27.902 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:22:27.902 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:22:27.902 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:22:27.902 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:22:27.902 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:22:27.902 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:22:27.902 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:22:27.902 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:22:27.902 Removing: /var/run/dpdk/spdk4/hugepage_info 00:22:27.902 Removing: 
/dev/shm/bdevperf_trace.pid2992891 00:22:27.902 Removing: /dev/shm/bdevperf_trace.pid3028125 00:22:27.902 Removing: /dev/shm/bdev_svc_trace.1 00:22:28.162 Removing: /dev/shm/nvmf_trace.0 00:22:28.162 Removing: /dev/shm/spdk_tgt_trace.pid2920764 00:22:28.162 Removing: /var/run/dpdk/spdk0 00:22:28.162 Removing: /var/run/dpdk/spdk1 00:22:28.162 Removing: /var/run/dpdk/spdk2 00:22:28.162 Removing: /var/run/dpdk/spdk3 00:22:28.162 Removing: /var/run/dpdk/spdk4 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2918198 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2919470 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2920764 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2921513 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2922461 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2922707 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2923696 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2923920 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2924278 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2928814 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2930419 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2931026 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2931388 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2931817 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2932223 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2932483 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2932742 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2933030 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2934009 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2937013 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2937285 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2937556 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2937788 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2938280 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2938401 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2938830 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2939032 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2939299 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2939533 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2939795 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2939812 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2940378 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2940639 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2940940 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2941225 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2941465 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2941561 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2941888 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2942223 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2942549 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2942801 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2943062 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2943316 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2943579 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2943834 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2944093 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2944350 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2944713 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2945064 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2945341 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2945592 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2945854 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2946109 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2946375 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2946632 00:22:28.162 Removing: /var/run/dpdk/spdk_pid2946898 00:22:28.420 Removing: /var/run/dpdk/spdk_pid2947239 00:22:28.420 Removing: /var/run/dpdk/spdk_pid2947441 00:22:28.420 
Removing: /var/run/dpdk/spdk_pid2947763 00:22:28.420 Removing: /var/run/dpdk/spdk_pid2951669 00:22:28.420 Removing: /var/run/dpdk/spdk_pid2975633 00:22:28.420 Removing: /var/run/dpdk/spdk_pid2977945 00:22:28.420 Removing: /var/run/dpdk/spdk_pid2980741 00:22:28.420 Removing: /var/run/dpdk/spdk_pid2983155 00:22:28.421 Removing: /var/run/dpdk/spdk_pid2985420 00:22:28.421 Removing: /var/run/dpdk/spdk_pid2985486 00:22:28.421 Removing: /var/run/dpdk/spdk_pid2992891 00:22:28.421 Removing: /var/run/dpdk/spdk_pid2992930 00:22:28.421 Removing: /var/run/dpdk/spdk_pid2995649 00:22:28.421 Removing: /var/run/dpdk/spdk_pid2998138 00:22:28.421 Removing: /var/run/dpdk/spdk_pid2998422 00:22:28.421 Removing: /var/run/dpdk/spdk_pid3003281 00:22:28.421 Removing: /var/run/dpdk/spdk_pid3015908 00:22:28.421 Removing: /var/run/dpdk/spdk_pid3018232 00:22:28.421 Removing: /var/run/dpdk/spdk_pid3020425 00:22:28.421 Removing: /var/run/dpdk/spdk_pid3027642 00:22:28.421 Removing: /var/run/dpdk/spdk_pid3027869 00:22:28.421 Removing: /var/run/dpdk/spdk_pid3028125 00:22:28.421 Removing: /var/run/dpdk/spdk_pid3030544 00:22:28.421 Removing: /var/run/dpdk/spdk_pid3035059 00:22:28.421 Removing: /var/run/dpdk/spdk_pid3035138 00:22:28.421 Removing: /var/run/dpdk/spdk_pid3035207 00:22:28.421 Removing: /var/run/dpdk/spdk_pid3035281 00:22:28.421 Removing: /var/run/dpdk/spdk_pid3035316 00:22:28.421 Removing: /var/run/dpdk/spdk_pid3037641 00:22:28.421 Removing: /var/run/dpdk/spdk_pid3037646 00:22:28.421 Removing: /var/run/dpdk/spdk_pid3040004 00:22:28.421 Removing: /var/run/dpdk/spdk_pid3040048 00:22:28.421 Removing: /var/run/dpdk/spdk_pid3040093 00:22:28.421 Removing: /var/run/dpdk/spdk_pid3040148 00:22:28.421 Removing: /var/run/dpdk/spdk_pid3040161 00:22:28.421 Removing: /var/run/dpdk/spdk_pid3042519 00:22:28.421 Removing: /var/run/dpdk/spdk_pid3042672 00:22:28.421 Removing: /var/run/dpdk/spdk_pid3045065 00:22:28.421 Removing: /var/run/dpdk/spdk_pid3045299 00:22:28.421 Removing: /var/run/dpdk/spdk_pid3048823 00:22:28.421 Removing: /var/run/dpdk/spdk_pid3048830 00:22:28.421 Removing: /var/run/dpdk/spdk_pid3058065 00:22:28.421 Removing: /var/run/dpdk/spdk_pid3058094 00:22:28.421 Removing: /var/run/dpdk/spdk_pid3060544 00:22:28.421 Removing: /var/run/dpdk/spdk_pid3060623 00:22:28.421 Removing: /var/run/dpdk/spdk_pid3060819 00:22:28.421 Removing: /var/run/dpdk/spdk_pid3061208 00:22:28.421 Clean 00:22:28.678 17:26:37 -- common/autotest_common.sh@1437 -- # return 0 00:22:28.678 17:26:37 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:22:28.678 17:26:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:28.678 17:26:37 -- common/autotest_common.sh@10 -- # set +x 00:22:28.678 17:26:37 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:22:28.678 17:26:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:28.678 17:26:37 -- common/autotest_common.sh@10 -- # set +x 00:22:28.678 17:26:37 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:22:28.678 17:26:37 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:22:28.678 17:26:37 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:22:28.678 17:26:37 -- spdk/autotest.sh@389 -- # hash lcov 00:22:28.678 17:26:37 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:22:28.678 17:26:37 -- spdk/autotest.sh@391 -- # hostname 00:22:28.678 17:26:37 -- spdk/autotest.sh@391 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-05 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:22:28.937 geninfo: WARNING: invalid characters removed from testname! 00:22:47.083 17:26:55 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:22:48.981 17:26:57 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:22:50.354 17:26:59 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:22:52.255 17:27:01 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:22:53.629 17:27:02 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:22:55.530 17:27:04 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:22:56.905 17:27:06 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:56.905 17:27:06 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:57.163 17:27:06 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:22:57.163 17:27:06 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:57.163 17:27:06 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:57.163 17:27:06 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.163 17:27:06 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.163 17:27:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.163 17:27:06 -- paths/export.sh@5 -- $ export PATH 00:22:57.163 17:27:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.163 17:27:06 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:22:57.163 17:27:06 -- common/autobuild_common.sh@435 -- $ date +%s 00:22:57.163 17:27:06 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713972426.XXXXXX 00:22:57.163 17:27:06 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713972426.eAwwc3 00:22:57.163 17:27:06 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:22:57.163 17:27:06 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:22:57.163 17:27:06 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:22:57.163 17:27:06 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:22:57.163 17:27:06 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:22:57.163 17:27:06 -- common/autobuild_common.sh@451 -- $ get_config_params 00:22:57.163 17:27:06 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:22:57.163 17:27:06 -- common/autotest_common.sh@10 -- $ set +x 00:22:57.163 17:27:06 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:22:57.163 17:27:06 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:22:57.163 17:27:06 -- pm/common@17 -- $ local monitor 00:22:57.163 17:27:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:57.163 
17:27:06 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3070433 00:22:57.163 17:27:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:57.163 17:27:06 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3070435 00:22:57.163 17:27:06 -- pm/common@21 -- $ date +%s 00:22:57.163 17:27:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:57.163 17:27:06 -- pm/common@21 -- $ date +%s 00:22:57.163 17:27:06 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3070437 00:22:57.163 17:27:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:57.163 17:27:06 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3070441 00:22:57.163 17:27:06 -- pm/common@26 -- $ sleep 1 00:22:57.163 17:27:06 -- pm/common@21 -- $ date +%s 00:22:57.163 17:27:06 -- pm/common@21 -- $ date +%s 00:22:57.163 17:27:06 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713972426 00:22:57.163 17:27:06 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713972426 00:22:57.164 17:27:06 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713972426 00:22:57.164 17:27:06 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713972426 00:22:57.164 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713972426_collect-cpu-temp.pm.log 00:22:57.164 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713972426_collect-vmstat.pm.log 00:22:57.164 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713972426_collect-cpu-load.pm.log 00:22:57.164 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713972426_collect-bmc-pm.bmc.pm.log 00:22:58.097 17:27:07 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:22:58.097 17:27:07 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96 00:22:58.097 17:27:07 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:22:58.097 17:27:07 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:22:58.097 17:27:07 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:22:58.097 17:27:07 -- spdk/autopackage.sh@19 -- $ timing_finish 00:22:58.097 17:27:07 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:58.097 17:27:07 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:22:58.097 17:27:07 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:22:58.097 17:27:07 -- spdk/autopackage.sh@20 -- $ exit 0 00:22:58.097 17:27:07 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:22:58.097 17:27:07 -- pm/common@30 -- $ signal_monitor_resources TERM 00:22:58.097 17:27:07 -- pm/common@41 -- $ local monitor pid pids 
signal=TERM 00:22:58.097 17:27:07 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:58.097 17:27:07 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:22:58.097 17:27:07 -- pm/common@45 -- $ pid=3070451 00:22:58.097 17:27:07 -- pm/common@52 -- $ sudo kill -TERM 3070451 00:22:58.097 17:27:07 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:58.097 17:27:07 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:22:58.097 17:27:07 -- pm/common@45 -- $ pid=3070452 00:22:58.097 17:27:07 -- pm/common@52 -- $ sudo kill -TERM 3070452 00:22:58.097 17:27:07 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:58.097 17:27:07 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:22:58.097 17:27:07 -- pm/common@45 -- $ pid=3070446 00:22:58.097 17:27:07 -- pm/common@52 -- $ sudo kill -TERM 3070446 00:22:58.355 17:27:07 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:58.355 17:27:07 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:22:58.355 17:27:07 -- pm/common@45 -- $ pid=3070454 00:22:58.355 17:27:07 -- pm/common@52 -- $ sudo kill -TERM 3070454 00:22:58.355 + [[ -n 2815582 ]] 00:22:58.355 + sudo kill 2815582 00:22:58.363 [Pipeline] } 00:22:58.377 [Pipeline] // stage 00:22:58.381 [Pipeline] } 00:22:58.392 [Pipeline] // timeout 00:22:58.398 [Pipeline] } 00:22:58.412 [Pipeline] // catchError 00:22:58.416 [Pipeline] } 00:22:58.432 [Pipeline] // wrap 00:22:58.437 [Pipeline] } 00:22:58.451 [Pipeline] // catchError 00:22:58.459 [Pipeline] stage 00:22:58.461 [Pipeline] { (Epilogue) 00:22:58.472 [Pipeline] catchError 00:22:58.473 [Pipeline] { 00:22:58.482 [Pipeline] echo 00:22:58.483 Cleanup processes 00:22:58.486 [Pipeline] sh 00:22:58.762 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:22:58.762 3070567 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/sdr.cache 00:22:58.762 3070853 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:22:58.779 [Pipeline] sh 00:22:59.061 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:22:59.061 ++ grep -v 'sudo pgrep' 00:22:59.061 ++ awk '{print $1}' 00:22:59.061 + sudo kill -9 3070567 00:22:59.072 [Pipeline] sh 00:22:59.346 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:05.915 [Pipeline] sh 00:23:06.196 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:06.196 Artifacts sizes are good 00:23:06.208 [Pipeline] archiveArtifacts 00:23:06.214 Archiving artifacts 00:23:06.373 [Pipeline] sh 00:23:06.654 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-phy-autotest 00:23:06.668 [Pipeline] cleanWs 00:23:06.677 [WS-CLEANUP] Deleting project workspace... 00:23:06.677 [WS-CLEANUP] Deferred wipeout is used... 00:23:06.683 [WS-CLEANUP] done 00:23:06.685 [Pipeline] } 00:23:06.706 [Pipeline] // catchError 00:23:06.718 [Pipeline] sh 00:23:07.000 + logger -p user.info -t JENKINS-CI 00:23:07.007 [Pipeline] } 00:23:07.020 [Pipeline] // stage 00:23:07.023 [Pipeline] } 00:23:07.038 [Pipeline] // node 00:23:07.042 [Pipeline] End of Pipeline 00:23:07.075 Finished: SUCCESS